sentences

sequence | labels
---|---
[
"The emotional state of a speaker can be influenced by many different factors in dialogues, such as dialogue scene, dialogue topic, and interlocutor stimulus.",
"The currently available data resources to support such multimodal affective analysis in dialogues are however limited in scale and diversity.",
"In this work, we propose a M ulti-modal M ulti-scene M ulti-label E motional D ialogue dataset, M 3 ED , which contains 990 dyadic emotional dialogues from 56 different TV series, a total of 9,082 turns and 24,449 utterances.",
"M 3 ED is annotated with 7 emotion categories (happy, surprise, sad, disgust, anger, fear, and neutral) at utterance level, and encompasses acoustic, visual, and textual modalities.",
"To the best of our knowledge, M 3 ED is the first multimodal emotional dialogue dataset in Chinese.",
"It is valuable for cross-culture emotion analysis and recognition.",
"We apply several state-of-the-art methods on the M 3 ED dataset to verify the validity and quality of the dataset.",
"We also propose a general Multimodal Dialogue-aware Interaction framework, MDI, to model the dialogue context for emotion recognition, which achieves comparable performance to the state-of-the-art methods on the M 3 ED.",
"The full dataset and codes are available 1 .",
"Emotion Recognition in Conversation (ERC) aims to automatically identify and track the emotional status of speakers during a dialogue (Poria et al., 2019b).",
"It is a crucial component to improve natural human-computer interactions and has a wide range of applications in interaction scenarios, including call-center dialogue systems (Danieli et al., 2015), conversational agents (Fragopana-gos and Taylor, 2005) and mental health diagnoses (Ringeval et al., 2018), etc.",
"Different from traditional multimodal emotion recognition on isolated * Corresponding Author 1 https://github.com/AIM3-RUC/RUCM3ED (Sad) !\"#$%&'( Xiaoan, I am sorry (Surprise)",
"utterances, multimodal ERC is a more challenging problem, because there are many influencing factors that affect the speakers' emotional state in a dialogue, including the dialogue context from multi-modalities, the scene, the topic, and even the personality of subjects, etc. (Poria et al., 2019b; Scherer, 2005; Koval et al., 2015). It has been proved in recent works (Majumder et al., 2019; Ghosal et al., 2019; Hu et al., 2021; Shen et al., 2020) that contextual information plays an important role in ERC tasks and brings significant improvements over baselines that only consider isolated utterances. DialogueRNN (Majumder et al., 2019) uses recurrent networks to model global and speaker-specific temporal-context information. DialogueGCN (Ghosal et al., 2019) and MMGCN (Hu et al., 2021) use graph-based networks to capture conversational dependencies between utterances in dialogues. DialogXL (Shen et al., 2020)",
"applies a strong pre-trained language model XLNet (Yang et al., 2019) to ERC and proposes a dialog-aware self-attention method for modeling the context information. The IEMOCAP (Busso et al., 2008) and MELD (Poria et al., 2019a) are two multimodal emotional dialogue benchmark datasets, which are widely used in the above-mentioned works and promote research in the affective computing field. However, both of them are limited in size and diversity. The videos in MELD are collected only from the Friends TV series, and the videos in IEMOCAP are recorded in laboratory environments from ten actors performing scripted and spontaneous dialogues. These limitations not only affect the investigation of generalization and robustness of the models, but also limit the exploration of other important influencing factors in dialogues, such as dialogue scene, dialogue topic, emotional influence from interlocutors, and so on.",
"In this work, we construct a large-scale Multimodal Multi-scene and Multi-label Emotional Dialogue dataset, M 3 ED , which consists of 990 emotional dyadic dialogue video clips from 56 different TV series (about 500 episodes), ensuring that there are various dialogue scenes and topics. We also consider the blended annotations of emotions, which are commonly observed in real-life human interactions (Devillers et al., 2005; Vidrascu and Devillers, 2005). M 3 ED contains 24449 utterances in total, which are more than three times larger than IEMOCAP and almost two times larger than MELD. There are rich emotional interaction phenomena in M 3 ED dialogues, for example, 5,396 and 2,696 inter-turn emotion-shift and emotion-inertia scenarios respectively, and 2,879 and 10,891 intra-turn emotion-shift and emotion-inertia scenarios respectively. To the best of our knowledge, M 3 ED is the first large-scale multi-modal emotional dialogue dataset in Chinese, which can promote research of affective computing for the Chinese language. It is also a valuable addition for cross-cultural emotion analysis and recognition.",
"We further perform the sanity check of the dataset quality. Specifically, we evaluate our proposed M 3 ED dataset on several state-of-the-art approaches, including DialogueRNN, DialogueGCN, and MMGCN. The experimental results show that both context information and multiple modalities can help model the speakers' emotional states and significantly improve the recognition performance, in which context information and multiple",
"multiple modalities are two salient factors of a multimodal emotion dialogue dataset. Furthermore, motivated by the masking strategies of self-attention used in DialogXL (Shen et al., 2020), we propose a general Multimodal Dialogue-aware Interaction (MDI) framework which considers multimodal fusion, global-local context modeling, and speaker interactions modeling and achieves state-of-the-art performance.",
"All in all, M 3 ED is a large, diverse, high-quality, and comprehensive multimodal emotional dialogue dataset, which can support more explorations in the related research directions, such as multi-label learning, interpretability of emotional changes in dialogues, cross-culture emotion recognition, etc. The main contributions of this work are as follows:",
"We build a large-scale Multi-modal Multi-scene and Multi-label Emotional Dialogue dataset called M 3 ED, which can support more explorations in the affective computing field. We perform a comprehensive sanity check of the dataset quality by running several state-of-the-art approaches on M 3 ED and the experimental results prove the validity and quality of the dataset. We propose a general Multimodal Dialogue-aware Interaction framework, MDI, which involves multimodal fusion, global-local context and speaker interaction modeling, and it achieves comparable performance to other state-of-the-art approaches.",
"Table 1 summarizes some of the most important emotion datasets related to this work. The EmoryNLP (Zahiri and Choi, 2018), EmotionLines (Chen et al., 2018), and DailyDialog (Li et al., 2017) are emotional dialogue datasets in only text modality, which have been widely used in the ERC tasks. The CMU-MOSEI (Zadeh et al., 2018), AFEW (Dhall et al., 2012), MEC (Li et al., 2018), and CH-SIMS (Yu et al., 2020) contain multiple modalities and have been wildly used for multimodal emotion recognition, but they are not conversational and can not support explorations of dialogue emotional analysis. The IEMOCAP (Busso et al., 2008), MSP-IMPROV (Busso et al., 2016) and MELD (Poria et al., 2019a) are",
"the currently available multimodal emotional dialogue datasets. The IEMOCAP and MSP-IMPROV datasets are recorded from ten/twelve actors performing scripted and spontaneous dyadic dialogues, and each utterance is manually labeled with discrete emotion categories. The MELD (Poria et al., 2019a) is a multi-modal multi-party emotional dialogue dataset extended from the text-based EmotionLines dataset (Chen et al., 2018), which is derived only from the Friends TV series.",
"Previous works on ERC focus on modeling context information in a conversation with different frameworks. BC-LSTM (Poria et al., 2017) employs a Bi-directional LSTM to capture temporal-context information in conversations. CMN (Haz-arika et al., 2018b) and ICON (Hazarika et al., 2018a) use distinct GRUs to model the global and speaker-specific temporal-context, and apply memory networks to model speaker emotional states. DialogueRNN (Majumder et al., 2019) uses distinct GRUs to model global and speaker-specific temporal-context, and global emotional states tracking respectively. DialogueGCN (Ghosal et al., 2019) captures conversational dependencies between utterances with a graph-based structure. MMGCN (Hu et al., 2021) further proposes a GCN-based multimodal fusion method for multimodal ERC tasks to improve recognition performance. DialogXL (Shen et al., 2020) first introduces a strong pre-trained language model XLNet for text-based ERC. It also proposes several masking strategies of self-attention to model the global, local, interspeaker, and intra-speaker interactions.",
"In order to build a large-scale, diversified, and high-quality multimodal emotional dialogue dataset, we collect video dialogue clips from different TV series, which can simulate spontaneous emotional behavior in the real-world environment (Dhall et al., 2012; Li et al., 2018; Poria et al., 2019a).",
"Since high-quality conversation video clips are very important, we require the crowd workers to follow the strict selection requirements, including the following major aspects: 1) The required TV series should belong to these categories, such as family, romance, soap opera, and modern opera, which have rich and natural emotional expressions. 2) The workers are required to select 15 25 high-quality emotional dialogue video clips from each TV series. 3) Each dialogue should have at least 3 rounds of interaction and a clear conversation topic. 4) In order to ensure the quality of the visual and acoustic modalities, the workers are required to select two-person dialogue scenes with clear facial expressions and intelligible voices.",
"After the dialogue selection, we randomly check several dialogues for each TV series and filter out the low-quality dialogues or ask the crowd workers to correct the inappropriate start and end timestamps.",
"In order to facilitate the process of emotion annotation, we first require the crowd workers to correct the text content and annotate the speaker info of each utterance. Since the videos of TV series do not have embedded subtitles, we use the OCR-5701",
"based (Optical Character Recognition) method 2 to automatically generate the text content and the corresponding timestamps. For speaker annotations, the first speaker in the dialogue is annotated as A, and the other speaker is annotated as B. In addition, we annotate the role names, ages and genders of these speakers as well.",
"We annotate each utterance based on Ekman's six basic emotions ( happy, surprise, sad, disgust, anger, and fear ) and an additional emotion label neutral , which is an annotation scheme widely used in previous works (Poria et al., 2019a; Busso et al., 2008). The annotators are asked to sequentially annotate the utterances, after watching the videos. Thus, the textual, acoustic and visual information, and the previous utterances in the dialogue are available for emotional annotation. The annotators are allowed to select more than one emotional label to account for blended emotions (e.g., anger&sad), which are commonly observed in real-life human interactions (Devillers et al., 2005). If none of the seven emotion categories can accurately describe the emotion status of the utterance, a special other category can be annotated.",
"In order to obtain high-quality annotations, we together with several emotional psychology experts design an annotation tutorial with reference to previous guidelines (Ekman, 1992; Campos et al., 2013). We train the annotators and provide them with an examination, and only those who pass the exam can participate in the annotation stage. The vast majority of the dataset is annotated by university students and all the annotators are native Mandarin speakers. We assign three annotators to each dialogue.",
"We apply the majority voting strategy over all the annotations of an utterance to produce its final emotion label. Please note that annotators are allowed to assign more than one emotion label to an utterance, and the importance of these labels is in descending order. We simply assign an importance value to the emotion label of each utterance in descending order, e.g. I ( e ) = 7 for the first emotion label, I ( e ) = 6 for the second emotion label, and so on. If a label is not assigned to the utterance, its importance value I ( e ) = 0 . An emotion label",
"e is assigned as one of the final emotion labels for an utterance, if it is assigned to the utterance by at least two annotators. And its importance value is decided by averaging its importance ranking from all annotators: I ( e ) = (cid:80) 3 k =1 I k ( e ) , where I k ( e ) is its importance value from annotator k .",
"To further ensure annotation quality, we design two strategies to review and revise incorrect annotations. 1) We calculate the annotation agreement between the annotators of each dialogue. For the dialogues with a poor agreement, we require all relevant annotators to review the annotations again and make corrections if necessary. 2) For the utterances (0.5% of all utterances) that don't have a majority annotators' agreement, we ask several high-quality annotators to review them and make a final emotion annotation decision for these utterances.",
"Finally, we analyze the inter-annotators agreement and achieve an overall Fleiss' Kappa (Fleiss et al., 2013) statistic of k = 0 . 59 for a seven-class emotion problem, which is higher than other datasets, such as k = 0 . 43 in MELD, k = 0 . 48 in",
"Table 2 presents several basic statistics of the M 3 ED dataset. It contains 990 dialogues, 9,082 turns, 24,449 utterances derived from 56 different TV series (about 500 episodes), which ensures the scale and diversity of the dataset. We adopt the TV-independent data split manner in order to avoid any TV-dependent bias, which means there is no overlap of TV series across training, validation, and testing sets. The basic statistics are similar across these three data splits. There are rich emotional interactions phenomena in the M 3 ED, for example, 5,396 and 2,696 inter-turn emotion-shift and emotion-inertia scenarios respectively, and 2,879 and 10,891 intra-turn emotion-shift and emotion-inertia scenarios. The emotion shift and emotion inertia are two important factors in dialogues, which are challenging and worthy of exploration (Poria et al., 2019a). As shown in the table, 89% of utterances are assigned with one emotion label, and 11% of utterances are assigned with blended emotions 3 .",
"Table 3 presents the single emotion distribution statistics. The distribution of each emotion cate-3",
"cate-3 The top 5 most frequent blended emotions are: anger&disgust, anger&sad, sad&anger, disgust&anger and fear&sad",
"gory is similar across train/val/test sets. As shown in Table 4, there are in total 626 different speakers in M 3 ED with balanced gender distribution. Among all the speakers, young and middle-aged speakers account for more than 80%.",
"A dialogue can be defined as a sequence of utterances D = { utt 1 , utt 2 , ..., utt N } , where N is the number of utterances. Each utterance consists of textual ( l ), acoustic ( a ) and visual ( v ) modalities. We denote u At [ a, v, l ] as the utterance-level feature of utterance utt t from speaker A with the textual, acoustic and visual modality respectively. The task aims to predict the emotional state for each utterance in the dialogue based on all existing modalities. Figure 2 illustrates our proposed Multimodal Dialogue-aware Interaction (MDI) framework, which contains three main modules: 1) Multimodal Fusion module aims to generate the utterance-level multimodal representation from different modalities. 2) Dialog-aware Interaction module aims to model the interactions in the dialogue; 3) Interaction Fusion and Classification module fuses the different interaction information from the outputs of the Dialog-aware Interaction module, and then makes the emotional state prediction based on the fused interaction information.",
"Multimodal Fusion Module: Based on the modality-specific feature representations from different modalities, we apply early fusion of these modalities features to produce the multimodal feature representation: u = concat ( u [ a ] , u [ v ] , u [ l ]) .",
"Dialog-aware Interaction Module: In order to adequately capture the contextual information in the dialogue, we propose the Dialog-aware Interaction Module which consists of L dialog-aware interaction blocks (gray block in Figure 2).",
"In each block, we adopt four sub-modules, Global Interaction, Local Interaction, Intra-speaker Interaction and Inter-speaker Interaction, to model the global, local, intra-speaker and inter-speaker interactions in the dialogue respectively.",
"We implement these four types of interactions in one Transformer layer by skillfully changing the masking strategies of self-attention (Shen et al., 2020; Li et al., 2020) as illustrated in Figure 3.",
"Interaction Fusion and Classification: As the Dialog-aware Interaction Module produces different outputs that carry various interaction contextual information, we fuse these outputs via simple addition.",
"Finally, we use one fully connected layer as a classifier to predict the emotional state based on the fused interaction information.",
"We investigate the state-of-the-art features of different modalities including textual, acoustic, and visual features for emotion recognition tasks 4 .",
"4 More detailed description of the feature extractors can be found in the supplementary material.",
"A.2",
"Textual Features: We extract the word-level features from a pre-trained RoBERTa model (Yu et al., 2020).",
"Furthermore, to get more efficient emotional features, we extract the finetuned features ([CLS] position) from the finetuned RoBERTa model trained on M 3 ED.",
"We refer to the word-level and finetuned utterance-level textual features as L_Frm, and L_Utt respectively.",
"Acoustic Features: We extract the frame-level features from a pre-trained Wav2Vec2.0 model (Baevski et al., 2020).",
"We extract the finetuned features (the last time step) from the Wav2Vec2.0 model finetuned on M 3 ED.",
"We refer to the frame-level and finetuned utterance-level acoustic features as A_Frm and A_Utt respectively.",
"Visual Features: We first propose a two-stage strategy to detect the speaker's faces 5 .",
"We then extract the face-level features via a pre-trained DenseNet model (Huang et al., 2017) for each utterance based on the detected speaker's faces.",
"DenseNet was trained on two facial expression benchmark corpus, FER+ (Barsoum et al., 2016) and AffectNet (Mollahosseini et al., 2017).",
"We average the face-level features within one utterance to get the averaged utterance-level features.",
"We refer to the face-level, averaged utterance-level visual features as V_Frm, V_Utt respectively.",
"We evaluate several state-of-the-art methods including utterance-level recognition methods and dialog-level recognition methods on the proposed M 3 ED dataset, and they are listed as follows:",
"MultiEnc: A flexible and efficient utterance-level multimodal emotion recognition framework (Zhao et al., 2021) that consists of several modality-specific encoders (LSTM, LSTM and TextCNN for acoustic, visual and textual modalities respectively) and a fusion encoder (several fully-connected layers) for emotion prediction.",
"For the utterance-level modality features, three DNN encoders are used for the three modalities respectively.",
"DialogueRNN: A state-of-the-art RNN-based ERC framework proposed in (Majumder et al., 2019), which captures the global and speaker-specific temporal context information, and global emotional state information via different GRUs.",
"For the multimodal experiments, the early-fusion method that concatenates different modality features as input is adopted in this work.",
"DialogueGCN: A state-of-the-art GCN-based ERC framework proposed in (Ghosal et al., 2019), which models long-distance dependency and speaker interactions via direct edges and different designed relations respectively.",
"For the multimodal experiments, we also adopt the early-fusion method in this work.",
"MMGCN: A state-of-the-art GCN-based multimodal ERC framework proposed in (Hu et al., 2021).",
"For the uni-modal experiments, we only model the fully connected graph.",
"We split the M 3 ED dataset into training, validation, testing sets in a TV-independent manner, which is a more challenging experiment setting.",
"The distribution of the data splits is shown in Table 3.",
"We use the weighted-F1 score (WF1) as the evaluation metrics.",
"We tune the parameters on the validation set and report the performance on the testing set.",
"We run each model three times and report the average performance to alleviate the influence of random parameter initialization.",
"We conduct two sets of experiments, including 1) the utterance-level baseline experiments of emotion recognition on isolated utterances without considering dialogue context, which aims to check the quality of each modality and compare the effectiveness of multimodal information for emotion recognition, and 2) the dialogue-level experiments of emotion recognition in the dialogue, which aims to compare our proposed general MDI framework with the state-of-the-art models in modeling dialogue context for emotion recognition.",
"For the utterance-level experiments, we adopt the Multi-Enc (Section 5.2) framework as the baseline model.",
"For the dialogue-level experiments, we compare to DialogueRNN, DialogueGCN, and MMGCN models.",
"Since different modality features are used in this work, we have tried different hidden sizes (such as 180, 256, and 512) in our experiments.",
"For the experiments on the proposed Multimodal Dialog-aware Interaction framework (Section 4), we use the Adam optimizer with learning rate of 3e-5.",
"We set the dropout as 0.1, the hidden size as 384 in the unimodal experiments and 512 in the multimodal experiments.",
"Table 5 presents the utterance-level baseline results.",
"Among the different unimodal features, the Table 5: Utterance-level baseline performance (WF1) of different features and different modalities.",
"finetuned utterance-level features achieve significant improvement on textual and acoustic modalities.",
"The multimodal information can bring significant performance improvement over unimodal.",
"However, for the multimodal experiments, the finetuned features do not show much improvement over the frame-level features.",
"It is mainly because the finetuned features retain more classification information and lose some modality-specific information, which limits the complementarity between the modalities.",
"In addition, we observe that there is no big gap between the performances on different modalities, which indicates the good quality of different modalities in our M 3 ED dataset.",
"Since the state-of-the-art dialogue-level methods mainly focus on modeling the dialogue context information based on the utterance-level features, we adopt the finetuned utterance-level features (Utt_ft) in the following experiments.",
"Table 6 presents the dialogue-level experiment results.",
"The results show that context information and multiple modalities, the two salient factors of a multimodal emotion dialogue dataset, both bring significant performance improvement, which also proves the validity and quality of the M 3 ED dataset to some extend.",
"Compared to the state-of-the-art models, our proposed general MDI framework achieves superior performance in the textual, acoustic, and visual unimodal experiments.",
"It demonstrates that the four dialogue-aware interaction strategies which consider both the globaland local-context interactions and the intraand inter-speaker interactions have better dialogue modeling ability than only considering part of these interactions, which demon-5705 Table 6: Emotion recognition performance (WF1) in dialogues under the unimodal and multimodal conditions.",
"strates the strong dialogue context modeling ability of MDI.",
"However, MDI does not outperform other models under the multimodal conditions, which may be due to the limited training dataset size and the limited ability of the vanilla multimodal fusion strategy in interaction modeling.",
"In the future, we will explore more effective multimodal fusion module and interaction modeling module within the MDI framework to improve its performance under multimodal conditions.",
"The M 3 ED dataset is a large, diversified, high-quality, and comprehensive multimodal emotional dialogue dataset.",
"Based on the characteristics of the dataset and the analysis from the extensive experiments, we believe that M 3 ED can support a number of related explorations in affective computing field.",
"Based on the experiment results, we think that the finetuned features lack sufficient modality-specific information, which limits the performance under the multimodal conditions.",
"Therefore, it is worth exploring to realize a more efficient multimodal fusion module based on the raw frame-level features and make the above proposed general Multimodal Dialog-aware Interaction (MDI) framework an end-to-end model.",
"According to psychological and behavioral studies, emotional inertia and stimulus (exter-nal/internal) are important factors that affect the speaker's emotional state in dialogues.",
"The emotional inertia and emotional stimulus can explain how one speaker's emotion affects his own or the other speaker's emotion.",
"There are rich emotional interaction phenomena including interand intra-turn emotion shifts in the M 3 ED dataset.",
"Therefore, it can support the exploration of interpretability of emotional changes in a Dialogue.",
"The blended emotions are commonly observed in human real-life dialogues, and multi-label learning can help reveal and model the relevance between different emotions.",
"Therefore, the M 3 ED dataset can support the exploration of multi-label emotion recognition in conversations.",
"Emotional expression varies across different languages and cultures.",
"The M 3 ED dataset in Chinese is a valuable addition to the existing benchmark datasets in other languages.",
"It can promote the research of cross-culture emotion analysis and recognition.",
"In this work, we propose a multi-modal, multi-scene, and multi-label emotional dialogue dataset, M 3 ED, for multimodal emotion recognition in conversations.",
"Compared to MELD, the currently largest multimodal dialogue dataset for emotion recognition, M 3 ED is larger (24,449 vs. 13,708 ut-terances), more diversified (56 different TV series vs. only one TV series Friends), with higher-quality (balanced performance across all three modalities), and containing blended emotions annotation which is not available in MELD.",
"M 3 ED is the first multimodal emotion dialogue dataset in Chinese, which can serve as a valuable addition to the affective computing community and promote the research of cross-culture emotion analysis and recognition.",
"Furthermore, we propose a general Multimodal Dialog-aware Interaction framework, which considers multimodal fusion, temporal-context modeling, and speaker interactions modeling, and achieves the state-of-the-art performance.",
"We also propose several interesting future exploration directions based on the M 3 ED dataset.",
"This work was partially supported by the National Key R&D Program of China (No. 2020AAA0108600), the National Natural Science Foundation of China (No. 62072462), Large-Scale Pre-Training Program 468 of Beijing Academy of Artificial Intelligence (BAAI), A*STAR RIE2020 Advanced Manufacturing and Engineering Domain (AME) Programmatic Grant (No. A1687b0033), NRF Centre for Advanced Robotics Technology Innovation (CARTIN) Project and China Scholarship Council.",
"This work presents M 3 ED, free and open dataset for the research community to study the multimodal emotion recognition in dialogues.",
"Data in M 3 ED are collected from TV series in Chinese.",
"To ensure that crowd workers were fairly compensated, we paid them at an hourly rate of 40 yuan ($6.25 USD) per hour, which is a fair and reasonable hourly wage in Beijing.",
"First, to select high-quality dialogues from 56 TV-series, we recruited 12 Chinese college students (5 males and 7 females).",
"Each student was paid 100 yuan ($15.625 USD) for selecting about 18 dialogues from each TV series.",
"To annotate the emotional status of the selected dialogues, we recruited 14 Chinese college students (6 males and 8 females).",
"Each student was paid 200 yuan ($31.25 USD) for annotating about 18 dialogues from each TV series with emotion labels, text correction, speaker, gender, and age information.",
"If only the emotion labels were annotated, the payment for each TV series was 100 yuan ($15.625 USD).",
"Considering the copy-right issue of TV-series, we will only release the name list of the TV-series and our annotations.",
"To facilitate future comparison research on this dataset, we will provide our extracted visual expression features and acoustic features.",
"We anticipate that the high-quality and rich annotation labels in the dataset will advance research in multimodal emotion recognition."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"objective",
"objective",
"result",
"result",
"other",
"abstain",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain"
] |
[
"Knowledge graph based simple question answering (KBSQA) is a major area of research within question answering.",
"Although only dealing with simple questions, i.e., questions that can be answered through a single knowledge base (KB) fact, this task is neither simple nor close to being solved.",
"Targeting on the two main steps, subgraph selection and fact selection, the research community has developed sophisticated approaches.",
"However, the importance of subgraph ranking and leveraging the subjectrelation dependency of a KB fact have not been sufficiently explored.",
"Motivated by this, we present a unified framework to describe and analyze existing approaches.",
"Using this framework as a starting point, we focus on two aspects: improving subgraph selection through a novel ranking method and leveraging the subjectrelation dependency by proposing a joint scoring CNN model with a novel loss function that enforces the well-order of scores.",
"Our methods achieve a new state of the art (85.44% in accuracy) on the SimpleQuestions dataset.",
"Knowledge graph based simple question answering (KBSQA) is an important area of research within question answering, which is one of the core areas of interest in natural language processing (Yao and Van Durme, 2014; Yih et al., 2015; Dong et al., 2015; Khashabi et al., 2016; Zhang et al., 2018; Hu et al., 2018).",
"It can be used for many applications such as virtual home assistants, customer service, and chat-bots.",
"A knowledge graph is a multi-entity and multi-relation directed graph containing the information needed to answer the questions.",
"The graph can be represented as collection of triples { (subject, relation, Work conducted during an internship at Alexa AI, CA. object) } .",
"Each triple is called a fact , where a directed relational arrow points from subject node to object node.",
"A simple question means that the question can be answered by extracting a single fact from the knowledge graph, i.e., the question has a single subject and a single relation, hence a single answer.",
"For example, the question Which Harry Potter series did Rufus Scrimgeour appear in? can be answered by a single fact (Ru-fus Scrimgeour, book.book-characters.appears-in-book, Harry Potter and the Deathly Hallows).",
"Given the simplicity of the questions, one would think this task is trivial.",
"Yet it is far from being easy or close to being solved.",
"The complexity lies in two aspects.",
"One is the massive size of the knowledge graph, usually in the order of billions of facts.",
"The other is the variability of the questions in natural language.",
"Based on this anatomy of the problem, the solutions also consist of two steps: (1) selecting a relatively small subgraph from the knowledge graph given a question and (2) selecting the correct fact from the subgraph.",
"Different approaches have been studied to tackle the KBSQA problems.",
"The common solution for the first step, subgraph selection (which is also known as entity linking), is to label the question with subject part ( mention ) and nonsubject part ( pattern ) and then use the mention to retrieve related facts from the knowledge graph, constituting the subgraph.",
"Sequence labeling models, such as a BiLSTM-CRF tagger (Huang et al., 2015), are commonly employed to label the mention and the pattern.",
"To retrieve the subgraph, it is common to search all possible n -grams of the mention against the knowledge graph and collect the facts with matched subjects as the subgraph.",
"The candidate facts in the subgraph may contain incorrect subjects and relations.",
"In our running example, we first identify the mention in the question, i.e.,Rufus Scrimgeour, and then retrieve the subgraph which could contain the following facts: { (Rufus Scrimgeour, book.book-characters.appears-in-book, Harry Potter and the Deathly Hallows), (Rufus Wainwright, music.singer.singer-of, I Don't Know What That Is) } .",
"For the second step, fact selection, a common approach is to construct models to match the mention with candidate subjects and match the pattern with candidate relations in the subgraph from the first step.",
"For example, the correct fact is identi-fied by matching the mention Rufus Scrimgeour with candidate subjects { Rufus Scrimgeour, Rufus Wainwright } and matching the pattern Which Harry Potter series did m appear in with candidate relations { book.book-characters.appears-in-book, music.singer.singer-of } .",
"Different neural network models can be employed (Bordes et al., 2015; Dai et al., 2016; Yin et al., 2016; Yu et al., 2017; Petrochuk and Zettlemoyer, 2018).",
"Effective as these existing approaches are, there are three major drawbacks.",
"(1) First, in subgraph selection, there is no effective way to deal with inexact matches and the facts in subgraph are not ranked by relevance to the mention; however, we will later show that effective ranking can substantially improve the subgraph recall.",
"(2) Second, the existing approaches do not leverage the dependency between mentionsubjects and patternrelations; however, mismatches of mentionsubject can lead to incorrect relations and hence incorrect answers.",
"We will later show that leveraging such dependency contributes to the overall accuracy.",
"(3) Third, the existing approaches minimize the ranking loss (Yin et al., 2016; Lukovnikov et al., 2017; Qu et al., 2018); however, we will later show that the ranking loss is suboptimal.",
"Addressing these points, the contributions of this paper are three-fold: (1) We propose a subgraph ranking method with combined literal and semantic score to improve the recall of the subgraph selection.",
"It can deal with inexact match, and achieves better performance compared to the previous state of the art. (2) We propose a low-complexity joint-scoring CNN model and a well-order loss to improve fact selection.",
"It couples the subject matching and the relation matching by learning order-preserving scores and dynamically adjusting the weights of scores.",
"(3) We achieve better performance (85.44% in accuracy) than the previous state of the art on the SimpleQuestions dataset, surpassing the best baseline by a large margin 1 .",
"The methods for subgraph selection fall in two schools: parsing methods (Berant et al., 2013; Yih et al., 2015; Zheng et al., 2018) and sequence tagging methods (Yin et al., 2016).",
"The latter proves to be simpler yet effective, with the most effective model being BiLSTM-CRF (Yin et al., 2016; Dai et al., 2016; Petrochuk and Zettlemoyer, 2018).",
"The two categories of methods for fact selection are match-scoring models and classification models.",
"The match-scoring models employ neural networks to score the similarity between the question and the candidate facts in the subgraph and then find the best match.",
"For instance, Bor-des et al. (2015) use a memory network to encode the questions and the facts to the same representation space and score their similarities.",
"Yin et al. (2016) use two independent models, a character-level CNN and a word-level CNN with attentive max-pooling.",
"Dai et al. (2016) formulate a two-step conditional probability estimation problem and use BiGRU networks.",
"Yu et al. (2017) use two separate hierarchical residual BiLSTMs to represent questions and relations at different abstractions and granularities.",
"Qu et al. (2018) propose an attentive recurrent neural network with similarity matrix based convolutional neural network (AR-SMCNN) to capture the semantic-level and literal-level similarities.",
"In the classification models, Ture and Jojic (2017) employ a two-layer BiGRU model.",
"Petrochuk and Zettlemoyer (2018) employ a BiLSTM to classify the relations and achieve the state-of-the-art performance.",
"In addition, Mohammed et al. (2018) evaluate various strong baselines with simple neural networks (LSTMs and GRUs) or non-neural network models (CRF).",
"Lukovnikov et al. (2017) propose an end-to-end word/character-level encoding network to rank subjectrelation pairs and retrieve relevant facts.",
"However, the multitude of methods yield progressively smaller gains with increasing model complexity (Mohammed et al., 2018; Gupta et al., 1 Ture and Jojic (2017) reported better performance than us but neither Petrochuk and Zettlemoyer (2018) nor Mohammed et al. (2018) could replicate their result.",
"2018).",
"Most approaches focus on fact matching and relation classification while assigning less emphasis to subgraph selection.",
"They also do not sufficiently leverage the important signature of the knowledge graphthe subjectrelation dependency, namely, incorrect subject matching can lead to incorrect relations.",
"Our approach is similar to (Yin et al., 2016), but we take a different path by focusing on accurate subgraph selection and utilizing the subjectrelation dependency.",
"We provide a unified description of the KBSQA framework.",
"First, we define Definition 1.",
"Answerable Question A question is answerable if and only if one of its facts is in the knowledge graph.",
"Let Q := { q | q is anwerable } be the set of answerable questions, and G := { ( s, r, o ) | s S , r R , o O} be the knowledge graph, where S , R and O are the set of subjects, relations and objects, respectively.",
"The triple ( s, r, o ) is a fact .",
"By the definition of answerable questions, the key to solving the KBSQA problem is to find the fact in knowledge graph corresponding to the question , i.e., we want a map : Q G .",
"Ideally, we would like this map to be injective such that for each question, the corresponding fact can be uniquely determined (more precisely, the injection maps from the equivalent class of Q to G since similar questions may have the same answer, but we neglect such difference here for simplic-ity).",
"However, in general, it is hard to find such map directly because of (1) the massive knowledge graph and (2) natural language variations in questions.",
"Therefore, end-to-end approaches such as parsing to structured query and encoding-decoding models are difficult to achieve (Yih et al., 2015; Sukhbaatar et al., 2015; Kumar et al., 2016; He and Golub, 2016; Hao et al., 2017).",
"Instead, related works and this work mitigate the difficulties by breaking down the problem into the aforementioned two steps, as illustrated below: (1) Subgraph Selection: question { mention, pattern } , mention subgraph (2) Fact Selection: match (cid:40) mention subject pattern relation ( subject, relation ) subgraph (subject*, relation*) object* (answer*) In the first step, the size of the knowledge graph is significantly reduced.",
"In the second step, the variations of questions are confined to mention subject variation and patternrelation variation.",
"Formally, we denote the questions as the union of mentions and patterns Q = M (cid:83) P and the knowledge graph as the subset of the Cartesian product of subjects, relations and objects G S R O .",
"In the first step, given a question q Q , we find the mention via a sequence tagger : Q M , q (cid:55) m q .",
"The tagged mention consists of a sequence of words m q = { w 1 , . . . , w n } and the pattern is the question excluding the mention p q = q \\ m q .",
"We denote the set of n -grams of m q as W n ( m q ) and use W n ( m q ) to retrieve the subgraph as S q R q O q G q := { ( s, r, o ) G | W n ( s ) (cid:84) W n ( m q ) (cid:54) = , n = 1 , . . . , | m q |} .",
"Next, to select the correct fact (the answer) in the subgraph, we match the mention m q with candidate subjects in S q , and match the pattern p q with candidate relations in R q .",
"Specifically, we want to maximize the log-likelihood (cid:40) max s S q log P ( s | m q ) max r R q log P ( r | p q ) .",
"(1) The probabilities in (1) are modeled by P ( s | m q ) = e h ( f ( m q ) ,f ( s )) (cid:80) s (cid:48) S q e h ( f ( m q ) ,f ( s (cid:48) )) (2) P ( r | p q ) = e h ( g ( p q ) ,g ( r )) (cid:80) r (cid:48) R q e h ( g ( p q ) ,g ( r (cid:48) )) , (3) where f : M (cid:83) S R d maps the mention and the subject onto a d -dimensional differentiable manifold embedded in the Hilbert space and similarly, g : P (cid:83) R R d .",
"Both f and g are in the form of neural networks.",
"The map h : R d R d R is a metric that measures the similarity of the vector representations (e.g., the cosine similarity).",
"Practically, directly optimizing (1) is difficult because the subgraph G q is large and computing the partition functions in (2) and (3) can be intractable.",
"Alternatively, a surrogate objective, the ranking loss (or hinge loss with negative samples) (Col-lobert and Weston, 2008; Dai et al., 2016) is minimized L rank = (cid:88) q Q (cid:88) s S q (cid:2) h f ( m q , s ) h f ( m q , s + ) + (cid:3) + + (cid:88) r R q (cid:2) h g ( p q , r ) h g ( p q , r + ) + (cid:3) + , (4) where h f ( , ) = h ( f ( ) , f ( )) , h g ( , ) = h ( g ( ) , g ( )) ; the sign + and indicate correct candidate and incorrect candidate, [ ] + = max( , 0) , and > 0 is a margin term.",
"Other variants of the ranking loss are also studied (Cao et al., 2006; Zhao et al., 2015; Vu et al., 2016).",
"To retrieve the subgraph of candidate facts using n -gram matching (Bordes et al., 2015), one first constructs the map from n -grams W n ( s ) to subject s for all subjects in the knowledge graph, yielding {W n ( s ) s | s S , n = 1 , . . . , | s |} .",
"Next, one uses the n -grams of mention W n ( m ) to match the n -grams of subjects W n ( s ) and fetches those matched facts to compose the subgraph { ( s, r, o ) G | W n ( s ) (cid:84) W n ( m ) (cid:54) = , n = 1 , . . . | m |} .",
"In our running example, for the mention Rufus Scrimgeour, we collect the subgraph of facts with the bigrams and unigrams of subjects matching the bigram { Rufus Scrimgeour } and unigrams { Rufus, Scrimgeour } .",
"One problem with this approach is that the retrieved subgraph can be fairly large.",
"Therefore, it is desirable to rank the subgraph by relevance to the mention and only preserve the most relevant facts.",
"To this end, different ranking methods are used, such as surface-level matching score with added heuristics (Yin et al., 2016), relation detection network (Yu et al., 2017; Hao et al., 2018), term frequency-inverse document frequency (TF-IDF) score (Ture and Jojic, 2017; Mohammed et al., 2018).",
"However, these ranking methods only consider matching surface forms and cannot handle inexact matches, synonyms, or polysemy (New York , the New York City, Big Ap-ple).",
"This motivates us to rank the subgraph not only by literal relevance but also semantic relevance.",
"Hence, we propose a ranking score with literal closeness and semantic closeness.",
"Specifically, the literal closeness is measured by the length of the longest common subsequence | | ( s, m ) between a subject s and a mention m .",
"The semantic closeness is measured by the co-occurrence probability of the subject s and the mention m P ( s, m ) = P ( s | m ) P ( m ) = P ( w 1 , . . . w n | (cid:101) w 1 , . . . (cid:101) w m ) P ( (cid:101) w 1 , . . . (cid:101) w m ) (5) = n (cid:89) i =1 P ( w i | (cid:101) w 1 , . . . (cid:101) w m ) P ( (cid:101) w 1 , . . . (cid:101) w m ) (6) = n (cid:89) i =1 (cid:32) m (cid:89) k =1 P ( w i | (cid:101) w k ) (cid:33) P ( (cid:101) w 1 , . . . (cid:101) w m ) (7) = n (cid:89) i =1 (cid:32) m (cid:89) k =1 P ( w i | (cid:101) w k ) (cid:33) m 1 (cid:89) j =1 P ( (cid:101) w j +1 | (cid:101) w j ) P ( (cid:101) w 1 ) , (8) where from (5) to (6) we assume conditional independence of the words in subject and the words in mention; from (6) to (7) and from (7) to (8) we factorize the factors using the chain rule with conditional independence assumption.",
"The marginal term P ( (cid:101) w 1 ) is calculated by the word occurrence frequency.",
"Each conditional term is approximated by P ( w i | w j ) exp { w Ti w j } where w i s are pre-trained GloVe vectors (Pennington et al., 2014).",
"These vectors are obtained by taking into account the word co-occurrence probability of surrounding context.",
"Hence, the GloVe vector space encodes the semantic closeness.",
"In practice we use the log-likelihood as the semantic score to convert multiplication in (8) to summation and normalize the GloVe embeddings into a unit ball.",
"Then, the score for ranking the subgraph is the weighted sum of the literal score and the semantic score score ( s, m ) = | | ( s, m ) + (1 ) log P ( s, m ) , (9) where is a hyper-parameter whose value need to be tuned on the validation set.",
"Consequently, for each question q , we can get the topn ranked subgraph G nq as well as the corresponding topn ranked candidate subjects S nq and relations R nq .",
"Once we have the ranked subgraph, next we need to identify the correct fact in the subgraph.",
"One school of conventional methods (Bordes et al., 2014, 2015; Yin et al., 2016; Dai et al., 2016) is minimizing the surrogate ranking loss (4) where neural networks are used to transform the (subject, mention) and (relation, pattern) pairs into a Hilbert space and score them with inner product.",
"separately, neglecting the difference of their contributions to fact matching.",
"Given that the number of subjects (order of millions) are much larger than the number of relations (order of thousands), incorrect subject matching can lead to larger error than incorrect relation matching.",
"Therefore, matching the subjects correctly should be given more importance than matching the relations.",
"Further, the ranking loss is suboptimal, as it does not preserve the relative order of the matching scores.",
"We empirically find that the ranking loss tends to bring the matching scores to the neighborhood of zero (during the training the scores shrink to very small numbers), which is not functioning as intended.",
"To address these points, we propose a joint-scoring model with well-order loss (Figure 1).",
"Together they learn to map from joint-input pairs to order-preserving scores supervised by a well-order loss, hence the name.",
"The joint-scoring model takes joint-input pairs, (subject, mention) or (rela-tion, pattern), to produce the similarity scores directly.",
"The well-order loss then enforces the well-order in scores.",
"A well-order, first of all, is a total ordera binary relation on a set which is antisymmetric, transitive, and connex.",
"In our case it is just the relation.",
"In addition, the well-order is a total order with the property that every non-empty set has a least element.",
"The well-order restricts that the scores of correct matches are always larger or equal to the scores of incorrect matches, i.e., i : j : S + i S j where S + i and S i indicate the score of correct match and the score of incorrect match.",
"We derive the well-order loss in the following way.",
"Let S = { S 1 , . . . , S n } = S + (cid:83) S be the set of scores where S + and S are the set of scores with correct and incorrect matches.",
"Let I = I + (cid:83) I be the index set of S , | I + | = n 1 , | I | = n 2 , n = n 1 + n 2 .",
"Following the well-order relation inf S + sup S i + I + : i I : S + i + S i 0 (cid:88) i + I + (cid:88) i I ( S + i + S i ) 0 (10) n 2 (cid:88) i + I + S + i + n 1 (cid:88) i I S i 0 , (11) where from (10) to (11) we expand the sums and reorder the terms.",
"Consequently, we obtain the well-order loss L well-order ( S ms , S pr ) = (cid:34) | I + | (cid:88) i S i ms | I | (cid:88) i + S i + ms + | I + || I | (cid:35) + + | J + | (cid:88) j S j pr | J | (cid:88) j + S j + pr + | J + || J | + , (12) where S ms , S pr are the scores for (mention, sub-ject), (pattern, relation) pairs for a question, I , J are the index sets for candidate subjects, relations in the ranked subgraph, + , indicate the correct candidate and incorrect candidate, [ ] + = max( , 0) , and > 0 is a margin term.",
"Then, the objective (1) becomes min q Q , ( s,r ) S nq R nq (cid:34) | I + | (cid:88) i h f ( m q , s i ) | I | (cid:88) i + h f ( m q , s i + ) + | I + || I | (cid:35) + + | J + | (cid:88) j h g ( p q , r j ) | J | (cid:88) j + h g ( p q , r j + ) + | J + || J | + .",
"(13)",
"This new objective with well-order loss differs from the ranking loss (4) in two ways, and plays a vital role in the optimization.",
"First, instead of considering the match of mentionsubjects and patternrelations separately, (13) jointly considers both input pairs and their dependency .",
"Specifically, (13) incorporates such dependency as the weight factors | I | (for subjects) and | J | (for re-lations).",
"These factors are the controlling factors and are automatically and dynamically adjusted as they are the sizes of candidate subjects and relations.",
"Further, the match of subjects, weighted by ( I + , I ), will control the match of relations, weighted by ( J + , J ).",
"To see this, for a question and a fixed number of candidate facts in subgraph, | I | = | J | , the incorrect number of subjects | I | is usually larger than the incorrect number of relations | J | , which causes larger loss for mismatching subjects.",
"As a result, the model is forced to match subjects more correctly, and in turn, prune the relations with incorrect subjects and reduce the size of J , leading to smaller loss.",
"Second, the well-order loss enforces the well-order relation of scores while the ranking loss does not have such constraint.",
"Here, we evaluate our proposed approach for the KBSQA problem on the SimpleQuestions benchmark dataset and compare with baseline approaches.",
"The SimpleQuestions (Bordes et al., 2015) dataset is released by the Facebook AI Research.",
"It is the standard dataset on which almost all previous state-of-the-art literature reported their num-bers (Gupta et al., 2018; Hao et al., 2018).",
"It also represents the largest publicly available dataset for KBSQA with its size several orders of magnitude larger than other available datasets.",
"It has 108 , 442 simple questions with the corresponding facts from subsets of the Freebase (FB2M and FB5M).",
"There are 1 , 837 unique relations.",
"We use the default train, validation and test partitions (Bordes et al., 2015) with 75 , 910 , 10 , 845 and 21 , 687 questions, respectively.",
"We use FB2M with 2 , 150 , 604 entities, 6 , 701 relations and 14 , 180 , 937 facts, respectively.",
"For sequence tagging, we use the same BiLSTM-CRF model as the baseline (Dai et al., 2016) to label each word in the question as either subject or non-subject.",
"The configurations of the model (Table 1) basically follow the baseline (Dai et al., 2016).",
"For subgraph selection, we use only unigrams of the tagged mention to retrieve the candidate facts (see Section 3.2) and rank them by the proposed relevance score (9) with the tuned weight = 0 .",
"9 (hence more emphasizing on literal matching).",
"We select the facts with topn scores as the subgraphs and compare the corresponding recalls with the baseline method (Yin et al., 2016).",
"For fact selection, we employ a character-based CNN (CharCNN) model to score (mention, subject) pairs and a word-based CNN (WordCNN) model to score (pattern, relation) pairs (with model configurations shown in Table 2), which is similar to one of the state-of-the-art baselines AMPCNN (Yin et al., 2016).",
"In fact, we first replicated the AMPCNN model and achieved comparable results, and then modified the AMPCNN model to take joint inputs and output scores directly (see Section 3.3 and Figure 1).",
"Our CNN models have only two convolutional layers (ver-sus six convolutional layers in the baseline) and have no attention mechanism, bearing much lower complexity than the baseline.",
"The CharCNN and WordCNN differ only in the embedding layer, the former using character embeddings and the latter using word embeddings.",
"The optimizer used for training the models is Adam (Kingma and Ba, 2014).",
"The learning configurations are shown in Table",
"3. For the hyper-parameters shown in Table 1, 2 and 3, we basically follow the settings in baseline literature (Yin et al., 2016; Dai et al., 2016) to promote a fair comparison.",
"Other hyper-parameters, such as the in the relevance score (9), are tuned on the validation set.",
"Our proposed approach and the baseline approaches are evaluated in terms of (1) the topn subgraph selection recall (the percentage of questions that have the correct subjects in the top-n candidates) and (2) the fact selection accuracy (i.e., the overall question answering accuracy).",
"Subgraph selection The subgraph selection results for our approach and one of the state-of-the-art baselines (Yin et al., 2016) are summarized in Table",
"4. Both the baseline and our approach use unigrams to retrieve candidates.",
"The baseline ranks the candidates by the length of the longest common subsequence with heuristics while we rank the candidates by the joint relevance score de-fined in (9).",
"We see that the literal score used in the baseline performs well and using the semantic score (the log-likelihood) (8) only does not outperform the baseline (except for the top50 case).",
"This is due to the nature of how the questions in the SimpleQuestions dataset are generatedthe majority of the questions only contain mentions matching the subjects in the Freebase in the lexical level, making the literal score sufficiently effective.",
"However, we see that combining the literal score and semantic score outperforms the baseline by a large margin.",
"For top1 , 5 , 10 , 20 , 50 recall our ranking approach surpasses the baseline by 11 .",
"9 %, 5 .",
"4 %, 4 .",
"6 %, 3 .",
"9 %, 4 .",
"1 %, respectively.",
"Our approach also surpasses other baselines (Lukovnikov et al., 2017; Yu et al., 2017; Qu et al., 2018; Gupta et al., 2018) under the same settings.",
"We note that the recall is not monotonically Rank Method Top-N Recall | | + heuristics 1 0.736 Literal: 5 0.850 10 0.874 20 0.888 (Yin et al., 2016) 50 0.904 100 0.916 log P 1 0.482 Semantic: 10 0.753 20 0.854 50 0.921 100 0.848 0 .",
"increasing with the topn .",
"The reason is that, as opposed to conventional methods which rank the entire subgraph returned from unigram matching to select the topn candidates, we choose only the first 200 candidates from the subgraph and then rank them with our proposed ranking score.",
"This is more efficient, but at the price of potentially dropping the correct facts.",
"One could trade effi-ciency for accuracy by ranking all the candidates in the subgraph.",
"Fact selection The fact selection results for our approach and baselines are shown in Table",
"5. The object accuracy is the same as the overall question answer accuracy.",
"Recall that in Section 3.3 we explained that the weight components in the well-order loss (13) are adjusted dynamically in the training to impose a larger penalty for mentionsubject mismatches and hence enforce correct matches.",
"This can be observed by looking at the different loss components and weights as well the subject and relation matching accuracies during the training.",
"As weights for mention subject matches increase, the losses for mention subject matches also increase, while both the errors for mentionsubject matches and pattern relation matches are high.",
"To reduce the errors, the model is forced to match mentionsubject more correctly.",
"As a result, the corresponding weights and losses decrease, and both mentionsubject and patternrelation match accuracies increase.",
"Effectiveness of well-order loss and joint-Approach Obj.",
"scoring model The first and second row of Table 5 are taken from the baseline AMPCNN (Yin et al., 2016) and BiLSTM (Petrochuk and Zettlemoyer, 2018) (the state of the art prior to our work 2 ).",
"The third row shows the accuracy of the baseline with our proposed well-order loss and we see a 1 .",
"3 % improvement, demonstrating the effectiveness of the well-order loss.",
"Further, the fourth row shows the accuracy of our joint-scoring (JS) model with well-order loss and we see a 3 % improvement over the best baseline 3 , demonstrating the effectiveness of the joint-scoring model.",
"Effectiveness of subgraph ranking The fifth row of Table 5 shows the accuracy of our joint-scoring model with well-order loss and top50 ranked subgraph and we see a further 4 .",
"3 % improvement over our model without subgraph ranking (the fourth row), and a 7 .",
"3 % improvement over the best baseline.",
"In addition, the subject accuracy increases by 4 .",
"0 %, which is due to the subgraph ranking.",
"Interestingly, the relation accuracy increases by 7 .",
"8 %, which supports our claim that improving subject matching can improve relation matching.",
"This demonstrates the effectiveness of our subgraph ranking and joint-scoring approach.",
"The sixth row shows the accuracy of our joint-scoring model with well-order loss and only the top1 subject.",
"In this case, the subject accuracy is limited by the top1 recall which is 85 .",
"5 %.",
"Despite that, our approach outperforms the best baseline by 1 .",
"2 %.",
"Further, the relation accuracy increases by 7 .",
"1 % over the fifth row, because restricting the subject substantially confines 2 As noted, Ture and Jojic (2017) reported better performance than us but neither Petrochuk and Zettlemoyer (2018) nor Mohammed et al. (2018) could replicate their result.",
"the choice of relations.",
"This shows that a sufficiently high top1 subgraph recall reduces the need for subject matching.",
"In order to analyze what constitutes the errors of our approach, we select the questions in the test set for which our best model has predicted wrong answers, and analyze the source of errors (see Table 6).",
"We observe that the errors can be categorized as follows: (1) Incorrect subject prediction; however, some subjects are actually correct, e.g., the prediction New York v.s. New York City. (2) Incorrect relation prediction; however, some relations are actually correct, e.g., the prediction fictional-universe.fictional-character.character-created-by v.s. book.written-work.author in the question Who was the writer of Dark Sun? and music.album.genre v.s. music.artist.genre. (3) Incorrect prediction of both.",
"However, these three reasons only make up 59.43% of the errors.",
"The other 40.57% errors are due to: (4) Ambiguous questions, which take up the majority of the errors, e.g., Name a species of fish. or What movie is a short film?",
"These questions are too general and can have multiple correct answers.",
"Such issues in the SimpleQuestions dataset are analyzed by Petrochuk and Zettlemoyer (2018) (see further discussion on this at the end of this Section).",
"(5) Non-simple questions, e.g., Which drama film was released in 1922?",
"This question requires two KB facts instead of one to answer correctly.",
"(6) Wrong fact questions where the reference fact is non-relevant, e.g., What is an active ingredient in Pacific? is labeled with Triclosan 0.15 soap.",
"(7) Out of scope questions, which have entities or relations out the scope of FB2M.",
"(8) Spelling inconsistencies, e.g., the predicted answer Operation Shylock: A Con-fession v.s. the reference answer Operation Shy-lock, and the predicted answer Tom and Jerry: Robin Hood and His Merry Mouse v.s. the reference answer Tom and Jerry.",
"For these cases, even when the models predict the subjects and relations correctly, these questions are fundamentally unanswerable .",
"Although these issues are inherited from the dataset itself, given the large size of the dataset and the small proportion of the problematic questions, it is sufficient to validate the reliability and significance of our performance improvement and conclusions.",
"Answerable Questions Redefined Petrochuk and Zettlemoyer (2018) set an upper bound of 83.4% for the accuracy on the SimpleQuestions dataset.",
"However, our models are able to do better than the upper bound.",
"Are we doing something wrong?",
"Petrochuk and Zettlemoyer (2018) claim that a question is unanswerable if there exist multiple valid subjectrelation pairs in the knowledge graph, but we claim that a question is unanswerable if and only if there is no valid fact in the knowledge graph.",
"There is a subtle difference between these two claims.",
"Based on different definitions of answerable questions, we further claim that incorrect subject or incorrect relation can still lead to a correct answer.",
"For example, for the question What is a song from Hier Komt De Storm? with the fact (Hier Komt De Storm: 1980-1990 live, music.release.track-list, Stephanie), our predicted subject Hier Komt De Storm: 1980-1990 live does not match the reference subject Hier Komt De Storm, but our model predicts the correct answer Stephanie because it can deal with inexact match of the subjects.",
"In the second example, for the question Arkham House is the publisher behind what novel?, our predicted relation book.book-edition.publisher does not match the reference relation book.publishing-company.books-published, but our model predicts the correct answer Watchers at the Strait Gate because it can deal with paraphrases of relations.",
"In the third example, for the question Who was the king of Lydia and Croesus's fa-ther?, the correct subject Croesus ranks second in our subject predictions and the correct relation people.person.parents ranks fourth in our relation predictions, but our model predicts the correct answer Alyattes of Lydia because it reweighs the scores with respect to the subjectrelation dependency and the combined score of subject and relation ranks first.",
"To summarize, the reason that we are able to redefine answerable questions and achieve significant performance gain is that we take advantage of the subgraph ranking and the subjectrelation dependency.",
"In this work, we propose a subgraph ranking method and joint-scoring approach to improve the performance of KBSQA.",
"The ranking method combines literal and semantic scores to deal with inexact match and achieves better subgraph selection results than the state of the art.",
"The joint-scoring model with well-order loss couples the dependency of subject matching and relation matching and enforces the order of scores.",
"Our proposed approach achieves a new state of the art on the SimpleQuestions dataset, surpassing the best baseline by a large margin.",
"In the future work, one could further improve the performance on simple question answering tasks by exploring relation ranking, different embedding strategies and network structures, dealing with open questions and out-of-scope questions.",
"One could also consider extending our approach to complex questions, e.g., multi-hop questions where more than one supporting facts is required.",
"Potential directions may include ranking the subgraph by assigning each edge (relation) a closeness score and evaluating the length of the shortest path between any two path-connected entity nodes.",
"The authors would like to thank anonymous reviewers.",
"The authors would also like to thank Nikko Strom and other Alexa AI team members for their feedback."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"result",
"objective",
"objective",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other"
] |
[
"As language technologies become more ubiquitous, there are increasing efforts towards expanding the language diversity and coverage of natural language processing (NLP) systems.",
"Arguably, the most important factor influenc-ing the quality of modern NLP systems is data availability.",
"In this work, we study the geographical representativeness of NLP datasets, aiming to quantify if and by how much do NLP datasets match the expected needs of the language speakers.",
"In doing so, we use entity recognition and linking systems, presenting an approach for good-enough entity linking without entity recognition first.",
"Last, we explore some geographical and economic factors that may explain the observed dataset distributions.",
"1 1 Introduction The lack of linguistic, typological, and geographical diversity in NLP research, authorship, and publications is by now widely acknowledged and documented (Caines, 2019; Ponti et al., 2019; Bender, 2011; Adelani et al., 2021).",
"Nevertheless, the advent of massively multilingual models presents opportunity and hope for the millions of speakers of under-represented languages that are currently under-served by language technologies.",
"Broadening up the NLP community's research efforts and scaling from a handful up to the almost 7000 languages of the world is no easy feat.",
"In order for this effort to be efficient and successful, the community needs some necessary foundations to build upon.",
"In seminal work, Joshi et al. (2020) provide a clear overview of where we currently stand with respect to data availability for the world's languages and relate them to the languages' representation in NLP conferences.",
"Choudhury and 1 Code and data are publicly available: https://github.",
"com/ffaisal93/dataset_geography .",
"Additional visualizations are available in the project page: https://nlp.cs.gmu.",
"edu/project/datasetmaps/ .",
"Deshpande (2021) study how linguistically fair are multilingual language models, and provide a nuanced framework for evaluating multilingual models based on the principles of fairness in economics and social choice theory.",
"Last, Blasi et al. (2022) provide a framework for relating NLP systems' performance on benchmark datasets to their downstream utility for users at a global scale, which can provide insights into development priorities; they also discuss academic incentives and socioeconomic factors that correlate with the current status of systematic cross-lingual inequalities they observe in language technologies performance.",
"These works provide insights into current data availability and estimated utility that are paramount for making progress, as well as an evaluation framework for future work.",
"However, there is one missing building block necessary for real progress: a way to estimate how representative of the underlying language speakers is the content of our datasets.",
"Any evaluation framework and any utility estimates 3381 we build can only be trustworthy as long as the evaluation data are representative.",
"Gebru et al. (2021) and Bender and Friedman (2018) recognize the importance of this information, including them in their proposed guidelines for datasheets and data statements respectively; but most datasets unfortunately lack such meta-information.",
"To the best of our knowledge, MaRVL (Liu et al., 2021) is the only dataset that is culturally-aware by design in terms of its content.",
"2 We propose a method to estimate a dataset's cultural representativeness by mapping it onto the physical space that language speakers occupy, producing visualizations such as Figure 1. Our contributions are summarized below: We present a method to map NLP datasets unto geographical areas (in our case, countries) and use it to evaluate how well the data represent the underlying users of the language.",
"We perform an analysis of the socio-economic correlates of the dataset maps we create.",
"We find that dataset representativeness largely correlates with economic measures (GDP), with geographical proximity and population being secondary.",
"We test a simple strategy for performing entity linking by-passing the need for named entity recognition.",
"We evaluate its efficacy on 19 languages, showing that we can get within up to 85% of a NER-informed harder-to-obtain model.",
"We also show that encouragingly, using either model largely leads to similar dataset maps.",
"Assumptions This work makes two assumptions: that",
"(a) data locality matters, i.e., speakers of a language are more likely to talk about or refer to local news, events, entities, etc as opposed to ones from a different side of the world, and",
"(b) that we can capture this locality by only focusing on entities.",
"Kumar et al. (2019) discuss these topical correlations that are present in datasets, 3 noting that they exist and that L1 language identification models tend to pick up on them, i.e. if a text mentions Finland, a L1 langid model is probably going to predict that the speaker is Finnish, because p ( Finland L1 = Finnish ) is generally high.",
"In that work Kumar et al. (2019) make explicit effort 2 Datasets designed to capture dialectal variations, e.g., SD-QA (Faisal et al., 2021), are culturally-aware in terms of annotator selection, but there is no guarantee that their content is also culturally-relevant for the language speakers.",
"3 See 2 of their paper.",
"to avoid learning such correlations because they are interested in building models for p ( L1 text ) (i.e. p ( L1 = Finnish Finland ) ) that are not confounded by the reverse conditional.",
"The mere fact they need to do this, though, confirms that real-world text has such topical confounds.",
"As for our second assumption that we can capture these topical correlations by only looking at entities, one need only to take a look at Table 2 of Kumar et al. (2019), which lists the top topical confounding words based on log-odds scores for each L1 language in their dataset: all lists include either entities related to a country where that language is spoken (e.g. Merkel', the name of a former chancellor, for German) or topical adjectives (e.g. romanian' for Romanian).",
"Approach For a given dataset, our method follows a simple recipe: 1. Identify named entities present in the dataset.",
"2. Perform entity linking to wikidata IDs.",
"3. Use Wikidata to link entities to countries.",
"We discuss each step below.",
"Entity Recognition Step Standard entity linking is treated as the sequence of two main tasks: entity recognition and entity disambiguation.",
"One approach is to first process the text to extract entities and then disambiguate these entities to the correct entries of a given knowledge base (eg. Wikipedia).",
"This approach relies on NER model quality.",
"However, to perform analysis on several datasets spanning several low-resource languages, one needs good-quality NER models in all these languages.",
"The interested reader will find a discussion on the cross-lingual consistency of NER models in Appendix F. 4 As we show in Section 4, we can bypass this NER step if we tolerate a small penalty in accuracy.",
"Entity Linking Step In this step we map named entities to their respective Wikidata IDs.",
"We further discuss this step in Section 4.",
"From Entities to Countries We produce maps to visualize the geographical coverage of the datasets we study, discussing their properties and our findings in Section 3.",
"4 Discussion summary: state-of-the-art NER models are not cross-lingually consistent, i.e. they do not produce the same entity labels when presented with translations of the same sentence.",
"We recommend using parallel data as part of the evaluation sets in multiple languages to measure this important aspect of models' performance.",
"To link entities to countries, 5 we rely on Wikidata entries, depending on the type of entity: for persons, we log their places of birth (P19) and death (P20), and country of citizenship (P27); for locations, we search for their associated country (P17); and for organizations, we use the links of the lo-cated_at' (P276) and headquartered_at' (P159) relations.",
"Since places of birth/death and headquarters are not necessarily at the country level, we perform a second step of associating these locations with countries.",
"In cases where the result does not correspond to a modern-day country (as can often be the case with historical figures), we do not make any attempts to link it to any modern day countries, excluding them from the analysis.",
"For example, the entry for Nicolaus Copernicus (Q619) lists him as born in Torun (Q47554) which is then mapped to Poland; as having died in From-bork (Q497115) that also maps to Poland; and as a citizen of the Kingdom of Poland (Q1649871) which is not mapped to any modern-day country; so he is only linked to Poland.",
"Albert Einstein is similarly mapped to both Germany and the United States, due to his places of birth (Ulm) and death (Princeton).",
"Before delving into our case studies, we first list a set of statistics of interest that one could extract from our produced dataset-country maps, in order to gauge a dataset's representativeness.",
"Representativeness Measures We will avoid providing a single metric, largely because the ideal metric to use will be very dataset-specific and related to the goals of the creators of the dataset and the socioeconomic correlates they are interested in (see discussion in Section 3.3).",
"As a first straightforward representativeness measure, we will compute the percentage of entities associated with countries where the language is largely spoken .",
"For example, according to Ethnologue (Eberhard et al., 2021), most Swahili speakers 6 reside in Tanzania, Kenya, Uganda, DR. Congo, and Rwanda.",
"For a Swahili dataset, then, we compute the percentage of all entities associated with this set of countries ( in-country ).",
"Notions of equity or fairness across countries could be measured by various fairness metrics, given the distribution of entities over countries in a dataset: from simply computing the standard deviation of the observations, 7 to treating countries as a population and computing fairness indices like the popular Gini index (Gini, 1912; Gastwirth, 1972) or the indices proposed by Speicher et al. (2018).",
"We will opt for a simpler, much more interpretable measure, the number of countries not represented in the dataset i.e. countries with associated entity count below a given threshold (we use zero for simplicity but higher values would also be reasonable for large datasets).",
"Last, especially for languages with significant amounts of speakers in more than one country, it is important to go deeper and measure the representativeness of this in-country portion.",
"For a simple example, an English dataset with entities only from the UK is probably not representative of Nigerian or Jamaican English speakers.",
"Hence, we will create two distributions over the countries where the language is largely spoken: the distribution of speaker populations (as available from Ethnologue and other public data), and the distribution of entities observed in the dataset.",
"Discrepancies between these two distributions will reveal potential issues.",
"While one could easily compute some measure of distance between the two distributions (e.g. the Bhattacharyya coefficient (Bhattacharyya, 1943)), in this work we will rely on the interpretable advantages of the visualizations.",
"Measures of fairness could be computed for this portion of the dataset, similarly as discussed above.",
"In the example dataset of the Swahili portion of MasakhaNER in Figure 1, the utility of our method is apparent.",
"Through the visualization, a researcher can quickly confirm that the dataset seems to not reflect the users of the language to a large extent: only about 17% of the entities indeed correspond to Tanzania, Kenya, Uganda, DR. Congo, or Rwanda (where Swahili and its varieties are treated as a lingua franca, at least in portions of these coun-tries).",
"Wealthy or populous countries like USA, France, and China, are well-represented, 8 as one would expect, while 156 countries and territories have no representation.",
"At the same time, the visualization allows a researcher to identify gaps: 7 Or approximations thereof such as the max-min of the observations, as used by (Debnath et al., 2021).",
"beyond the neighboring African countries and perhaps the Middle East, north-west African countries as well as central America or central/south-east Asia are clearly under-represented in this dataset.",
"Between the main Swahili-speaking countries, Tanzania, Kenya, and Uganda are well-represented (DR Congo and Rwanda less so, but they have less Swahili speakers), with the former two perhaps slightly over-represented and the latter (as well as Rwanda) being under-represented relative to the speakers population, c.f. red (dataset entities) and green (proportional to population) bars in Figure 1. 3.1 Datasets and Settings We apply the process described above on several datasets, chosen mostly for their language and typological diversity.",
"Our process is not datasetor language-dependent, 9 and could easily be applied on any NL dataset.",
"We briefly describe the datasets we include in our study below, with detailed statistics in Appendix C. NER Datasets We study the WikiANN dataset (Pan et al., 2017) that is commonly used in the evaluation of multilingual models.",
"We additionally study the MasakhaNER dataset (Ade-lani et al., 2021), which was created through participatory design ( et al., 2020) in order to focus on African languages.",
"Since these datasets are already annotated with named entities, we only need to perform entity linking.",
"rather than contexts), namely SQuAD (Rajpurkar et al., 2016), MLQA (Lewis et al., 2020), TyDiQA (Clark et al., 2020), and Natural Questions (Kwiatkowski et al., 2019, NQ;), which have unique characteristics that lend themselves to interesting comparisons.",
"SQuAD is a large English-only dataset (although it has been translated through efforts like XQuAD (Artetxe et al., 2020)).",
"MLQA is a n -way parallel multilingual dataset covering 7 languages, created by translating an English dataset.",
"TyDi-QA is another multilingual dataset covering 11 languages, but each language portion is derived separately, without translation involved.",
"Last, NQ is an English QA dataset created based on real-world queries on the Google search engine for which annotators found relevant Wikipedia context, unlike the other datasets that were created by annotators forming questions given a context.",
"Additional Datasets While not further discussed in this paper, additional visualizations for more datasets (e.g. for the X-FACTR benchmark (Jiang et al., 2020), and several machine translation benchmarks) are available in the project's webpage: https://nlp.cs.gmu.edu/ project/datasetmaps/ .",
"Beyond Figure 1, we also show example maps in Figure 2 for NQ, MLQA, SQuAD, and the English portion of TyDi-QA.",
"We provide additional maps for all other datasets in Appendix G. Comparing datasets The comparison of MasakhaNER to the WikiANN dataset (see Appendix G) reveals that the former is rather more localized (e.g. more than 80% of the identified entities in the Dholuo dataset are related to Kenya) while the latter includes a smaller portion from the countries where most native speakers reside (between 10%-20%) and almost always also includes several entries that are very Europeanor western-centric.",
"The effect of the participatory design ( et al., 2020) approach on creating the MasakhaNER dataset, where data are curated from local sources, is clear in all language portions of the dataset, with data being highly representative of the speakers.",
"In Figures 89 (App. G) the majority of entities in the Wolof portion are from Senegal and neighboring countries (as well as France, the former colonial power of the area), and the Yoruba and Igbo ones are centered on Nigeria.",
"Figure 2 allows for a direct comparison of different QA datasets (also see maps for other TyDiQA languages in Appendix G).",
"The first notable point has to do with NQ, which was built based on real-world English-language queries to the Google search engine.",
"Since such queries happen all over the world, this is reflected in the dataset, which includes entities from almost all countries in the world.",
"Two types of countries are particularly represented: ones where English is an official language (USA, UK, Australia, but also, to a lesser extent, India, Nigeria, South Africa, and the Philippines); and wealthy ones (European, Japan, China, etc).",
"In our view, NQ is an exemplar of a representative dataset, because it not only includes representation of most countries where the language is spoken (with the sum of these entities being in their large majority in-country: 80%) but due to its size it also includes entities from almost all countries.",
"SQuAD also has a large percentage in-country (63%) but it is less representative of different Englishes than NQ.",
"India, for instance, is relatively under-represented in all datasets; in SQuAD it ranks 7 th , but it ranks 3 rd in NQ (see red bars in bottom left of figures).",
"On the other hand, the geographical representativeness of both MLQA and TyDi-QA (their English portion) is lacking.",
"Since these datasets rely on Wikipedia articles for their creation, and Wikipedia has a significant western-country bias (Greenstein and Zhu, 2012; Hube and Fetahu, 2018), most entities come from Europe, the US, and the Middle East.",
"All these datasets underrepresent English speakers from English-speaking countries of the Global South like Kenya, South Africa, or Nigeria, since there are practically almost no entities from these countries.",
"MLQA further under-represents the speakers of all other languages it includes beyond English, since all data are translations of the English one.",
"Contrast this to TyDi-QA and its visualized Swahili portion which, even though still quite western-centric, does have a higher representation from countries where Swahili is spoken than the TyDi-QA English portion.",
"This discussion brings forth the importance of being cautious with claims regarding systems' utility, when evaluated on these datasets.",
"One could argue that a QA system that is evaluated on NQ does indeed give a good estimation of real-world utility; a system evaluated on TyDi-QA gives a distorted notion of utility (biased towards western-based speakers and against speakers from the Global 3385 TyDi-QA (11) MLQA (1) SQUAD (1) NaturalQ.",
"South); a system evaluated on MLQA will give an estimation as good as one evaluated on TyDi-QA, but only on the English portion.",
"We clarify that this does not diminish the utility of the datasets themselves as tools for comparing models and making progress in NLP: MLQA is extremely useful for comparing models across languages on the exact same data , thus facilitating easy comparisons of the cross-lingual abilities of QA systems, without the need for approximations or additional statistical tests.",
"But we argue that MLQA should not be used to asses the potential utility of QA systems for German or Telugu speakers.",
"Similar observations can be made about comparing two similar projects that aim at testing the memorization abilities of large language models, namely X-FACTR and multi-LAMA (mLAMA; Kassner et al., 2021) see corresponding Figures in Appendix G. Both of these build on top of Wikidata and the mTREx dataset.",
"However, mLAMA translates English prompts and uses entity-relation triples mined from the English portion of Wikidata, unlike X-FACTR which uses different data for each language, mined from their respective portion of Wikidata.",
"Both are still western-biased, since they rely on Wikipedia, but one (X-FACTR) is better at giving an indication of potential downstream utility to users.",
"In this section we attempt to explain our findings from the previous section, tying them to socioeconomic factors.",
"Empirical Comparison of Factors We identify socioeconomic factors that could be used to explain the observed geographic distribution of the entities in the datasets we study.",
"These are: a country's population pop a country's gross domestic product (GDP) gdp a country's GDP per capita gdppc a country's landmass land a country's geographical distance from coun-try/ies where the language is spoken geo The first four factors are global and fixed.",
"The fifth one is relative to the language of the dataset we are currently studying.",
"For example, when we focus on the Yoruba portion of the mTREx dataset, we use Nigeria (where Yoruba is spoken) as the focal point and compute distances to all other countries.",
"The assumption here is that a Yoruba speaker is more likely to use or be interested in entities first from their home country (Nigeria), then from its neighboring countries (Cameroon, Chad, Niger, Benin) and less likely of distant countries (e.g. Argentina, Canada, or New Zealand).",
"Hence, we assume the probability to be inversely correlated with the country's distance.",
"For macro-languages or ones used extensively in more than one country, we use a population-weighted combination of the factors of all relevant countries.",
"To measure the effect of such factors it is common to perform a correlational analysis, where one measures Spearman's rank correlation coefficient between the dataset's observed geographical distribution and the factors .",
"It is important, though, that the factors are potentially covariate, particularly population and GDP.",
"Hence, we instead compute the variance explained by a linear regression model with factors as input, i.e., a pop + b gdp + c gdppc + d geo + e with a e learned parameters, trained to predict the log of observed entity count of a country.",
"We report explained variance and mean absolute error from five-fold cross-validation experiments to avoid overfitting.",
"results with different combination of factors for the QA datasets are listed in Table 1. 10 The best sin-10",
"sin-10 See Appendix H for NER datasets, and Appendix I for breakdown by language for all datasets.",
"gle predictor is, perhaps unsurprisingly, the GDP of the countries where the language is spoken: all datasets essentially over-represent wealthy countries (e.g. USA, China, or European ones).",
"Note that GDP per capita is not as good a predictor, neither is landmass.",
"A combination of geographical distance with GDP explains most of the variance we observe for all datasets, an observation that confirms the intuitions we discussed before based solely on the visualizations.",
"Importantly, the fact that including population statistics into the model deteriorates its performance is further proof that our datasets are not representative of or proportional to the underlying populations.",
"The only dataset that is indeed better explained by including population (and GDP per capita) is NQ, which we already argued presents an exemplar of representativeness due to its construction protocol.",
"Limitations It is important to note that our assumptions are also limiting factors in our analyses.",
"Mapping languages to countries is inherently lossy.",
"It ignores, for instance, the millions of immigrants scattered throughout the world whose L1 language could be different than the dominant language(s) in the region where they reside.",
"Another issue is that for many languages the necessary granularity level is certainly more fine than country; if a dataset does not include any entities related to the Basque country but does include a lot of entities from Spain and France, our analysis will incorrectly deem it representative, even though the dataset could have been a lot more culturally-relevant for Basque speakers by actually including Basque-related entities.",
"Another limitation lies in the current state of the methods and data resources on which our approach relies.",
"Beyond discrepancies in NER/EL across languages (addressing which is beyond the scope of this work), we suspect that Wikidata suffers from the same western-centric biases that Wikipedia is known for (Greenstein and Zhu, 2012).",
"As a result, we might be underestimating the cultural representativeness of datasets in low-resource languages.",
"An additional hurdle, and why we avoid providing a single concrete representativeness score or something similar, is that the ideal combination of socioeconomic factors can be subjective.",
"It could be argued, for instance, either that geographic proximity by itself should be enough, or that it should not matter at all.",
"Even further, other factors that we did not consider (e.g. literacy rate or web access) might influence dataset construction decisions.",
"In any case, we share the coefficients of the NQ model, since it is the most representative dataset we studied, at least for English: a = 0 .",
"1 .",
"46 (for pop ), b = 0 .",
"87 ( gdp ), c = 25 .",
"4 ( gdppc ), d = 0 .",
"41 ( geo ).",
"We believe that ideally GDP should not matter ( b 0) and that a combination of speaker population and geographic proximity is ideal.",
"11 3.4 Geographical Breakdown of Models' Performance Beyond the analysis of the datasets themselves, we can also break down the performance of models by geographical regions, by associating test (or dev) set samples containing entities with the geographical location of said entities.",
"Since most test sets are rather small (a few hundred to a couple thousand instances) we have to coarsen our analysis: we map each country to a broader region (Africa, Americas, Asia, Europe, Oceania), keeping historical entities in a separate category (History).",
"12 We perform such a case study on TyDi-QA, comparing the performance on the TyDi-QA development sets of two models: one trained mono-lingually on the training set of each language of TyDi-QA (gold task), and another model trained by Debnath et al. (2021) on English SQuAD and automatically generated translations in the target languages.",
"Example results on Telugu shown in Figure 3 reveal some notable trends.",
"13 First, training set representation (green bars in the Figures) is not a necessary condition for good test set performance (red bars).",
"Some test set instances (e.g. with historical and African entities) receive similar test F1 score from both models.",
"Perhaps the most interesting though, is the comparison of the Asian and European portions of the test set: the Telugu monolingual model achieves similar performance in these two subsets; but the SQuAD-trained model is almost 20 percentage points worse on the Asian subset, showing the potential unfairness of translation-based models (Debnath et al., 2021).",
"For most TyDi-QA languages (Indonesian being an exception, see Table 2) the macro-standard deviation (computed over the averages of the 6 region subsets) is larger for the SQuAD-trained model (which is, hence, less fair than models trained on 11 However regrettable a fact, it is undeniable that western culture and politics have world-wide effects.",
"So their (over)representation as a result of their high influence (and GDP) might actually reflect the true interests of people everywhere!",
"12 Future work could explore a different clustering.",
"13 See Table 4 in Appendix D for all languages.",
"We use mGENRE (Cao et al., 2021) for the task of multilingual entity linking, a sequence to sequence system that predicts entities in an auto-regressive manner.",
"It works particularly well in a zero-shot setting as it considers 100+ target languages as latent variables to marginalize over.",
"Typically, the input to mGENRE can be informed by a NER model that provides the named entity span over the source.",
"For instance, in the Italian sentence \"[START] Einstein [END] era un fisico tedesco.\" ( Einstein was a German physicist. ) the word Einstein is enclosed within the entity span.",
"mGENRE is trained to use this information to return the most relevant Wikidata entries.",
"Due to the plasticity of neural models and mGE-BRE's auto-regressive token generation fashion, we find that by simply enclosing the whole sentence in a span also yields meaningful results.",
"In particular, for the previously discussed Italian sentence now the input to mGENRE is \"[START] Einstein era un fisico tedesco. [END]\" .",
"The advantage of this approach is two-fold.",
"First, one does not need a NER component.",
"Second, exactly because of bypassing the NER component, the EL model is now less constrained in its output; in cases where the NER component made errors, there's a higher chance that the EL model will re-b e n h i n e s t q u e j a v r u s t u r c m n j p n language 0.0 0.2 0.4 0.6 0.8 1.0 a g r ee m e n t @ k WiKiANN k i n p c m y o r w o l i b o s w a h a u l u g l u o a m h language 0.0 0.2 0.4 0.6 0.8 1.0 MasakhaNER Comparing top-k k=1 k=2 k=3 Figure 4: For some languages a NER-Relaxed model is within 60% of a NER-Informed model.",
"Experiments and Results We conduct experiments to quantify how different a model uninformed by a NER model (NER-Relaxed) will perform compared to one following the typical pipeline (NER-Informed).",
"Given the outputs of the two models over the same set of sentences, we will compare their average agreement@ k , as in the size of the intersection of the outputs of the two models divided by the number of outputs of the NER-Informed model, when focusing only on their topk outputs.",
"14 We aggregate these statistics at the sentence level over the whole corpus.",
"We focus on two datasets, namely WikiANN and MasakhaNER, summarizing the results in Figure 4. 15 Comparing the general performance between these two datasets, it is clear that general agreement is decent.",
"In 7 Out of 9 typologically diverse languages from WikiANN, more than 60% top-1 entities are linked by both models.",
"The African languages from MasakhaNER are low-resource ones yielding less than 40% EL agreement to English in all cases.",
"Given that most of these languages have not been included in the pre-training of BART (the model mGENRE is based on), we expect that using AfriBERTa (Ogueji et al.) or similar models 14 Both models typically output between 13 entity links ranked according to their likelihood.",
"Effect on downstream maps We compare the dataset maps we obtain using NER-Relaxed and NER-Informed (using gold annotations) models in our pipeline for the MasakhaNER dataset.",
"Overall, the maps are very similar.",
"An example visualization of the two maps obtained for Swahili is in Figure 5 in Appendix E.1.",
"The NER-Informed model produces slightly fewer entities overall (likely exhibiting higher precision for lower link recall) but there are minimal differences on the representativeness measures e.g., the in-country percentage changes from 15.3% (NER-Informed) to 16.9% (NER-Relaxed).",
"We can compare the distributions of the topk countries obtained with the two models using Ranked Biased Overlap (RBO; higher is better; Webber et al., 2010).",
"16 The results for varying values for k (topk countries) are presented in Table 6 in Appendix E.1.",
"We overall obtain very high RBO values ( > . 8 for k = 10) for all language portions and all values of k .",
"For example for 8 of the 10 MasakhNER languages the two models almost completely agree on the top-10 countries with only slight variations in their ranking.",
"Dholuo and Amharic are the ones exhibiting the worse overlap (but still > . 5 RBO).",
"We present a recipe for visualizing how representative NLP datasets are with respect to the underlying language speakers.",
"We plan to further improve our tool 17 by making NER/EL models more robustly handle low-resource languages.",
"We will also expand our dataset and task coverage, to get a broader overview of the current utility of NLP systems.",
"This work is generously supported by NSF Awards 2040926 and 2125466."
] | [
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"method",
"result",
"method",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"other"
] |
[
"Coherent discourse is distinguished from a mere collection of utterances by the satisfaction of a diverse set of constraints, for example choice of expression, logical relation between denoted events, and implicit compatibility with world-knowledge.",
"Do neural language models encode such constraints?",
"We design an extendable set of test suites addressing different aspects of discourse and dialogue coherence.",
"Unlike most previous coherence evaluation studies, we address specific linguistic devices beyond sentence order perturbations, allowing for a more fine-grained analysis of what constitutes coherence and what neural models trained on a language modelling objective do encode.",
"Extending the targeted evaluation paradigm for neural language models (Marvin and Linzen, 2018) to phenomena beyond syntax, we show that this paradigm is equally suited to evaluate linguistic qualities that contribute to the notion of coherence.",
"Statistical models trained on large amounts of data using the language modelling objective (predicting words in context) have shown to pick up an intriguing amount of implicit knowledge about other tasks, for example syntactic knowledge (Warstadt et al., 2020; Hu et al., 2020) or world knowledge (Trinh and Le, 2019; Tamborrino et al., 2020).",
"They have also been shown to exhibit, within these tasks, interesting divergences from expectation and sensitivity to confounding factors (e.g. McCoy et al. (2019)).",
"Inspired by the recently released SyntaxGym (Gauthier et al., 2020), which enables specific and standardised evaluation of syntactic knowledge encoded in such models, we explore whether similar methods can be applied to the study of discourse knowledge or coherence , i.e., constraints acting across sentence boundaries, as illustrated in (1) (where \"#\" marks the less acceptable variant).",
"(1)",
"a. #The lone ranger rode off into the sunset.",
"Then he jumped on his horse.",
"b. The lone ranger jumped on his horse.",
"Then he rode into the sunset.",
"A common approach to coherence evaluation consists in shuffling the sentence order of a text, thereby creating incoherent text samples that need to be discriminated from the original (Barzilay and Lapata, 2008).",
"While this approach to creating incoherent test data is intuitive enough, recent studies suggest that it paints only a partial picture of what constitutes coherence (Lai and Tetreault, 2018; Mohammadi et al., 2020; Pishdad et al., 2020).",
"It does not pinpoint the qualities that make the shuffled text incoherent, it does not tell us which linguistic devices are at fault, emphasising the need to move beyond this technique.",
"This paper aims to add to the growing body of research stressing the need for more qualitative evaluations of text coherence (See et al., 2019; Mohammadi et al., 2020; Pishdad et al., 2020).",
"We design different test suites created semiautomatically from existing corpora.",
"This eases the burden of creating them from scratch and ensures the inclusion of multiple genres, crucially including dialogue data.",
"Each test suite addresses a hypothesis about an underlying linguistic device contributing to a text's coherence, i.e., choice of referring expressions, discourse connectives, and intention (speaker commitment).",
"Our contributions are the following: We extend SyntaxGym to handle phenomena acting across sentence boundaries, but keep the general functionality to allow the use of both syntactic and coherence test suites, show that it is possible to evaluate dialogue models by extending lm-zoo (SyntaxGym's model repository), and present a first set of coherence test suites, each assessing a fine-grained and linguistically motivated element of coherence.",
"Our work thus eliminates the need for adapting and gathering various benchmark datasets by providing an easily extensible coherence evaluation framework that allows the use of existing test suites and the design of new ones.",
"At the moment, all of the test suites reported below are in English, but we come back to possible extensions in Section 5.",
"Our results are mixed: To the extent that the test suites effectively capture coherence, the examined models are neither systematically incoherent nor coherent.",
"We take this as support for our claim that more and better linguistically informed test suites are needed in oder to fully understand if neural models actually do capture genuine coherence.",
"We expect to develop our work further, but at this point, our contribution is a systematic framework that will allow us to do just that.",
"SyntaxGym.",
"Gauthier et al. (2020) develop a toolkit for targeted evaluation of language models on different syntactic phenomena.",
"It is built on top of lm-zoo , 1 a repository of language models that each specify their corresponding function to extract token level surprisal values s ( t ) from the language model's conditional token probabilities p .",
"Different syntactic phenomena can be evaluated by running models on different test suites.",
"Each test suite contains items with minimally different conditions, focusing on the specific phenomenon.",
"An example item for NUMBER AGREEMENT is given below.",
"https://cpllab.github.io/lm-zoo/",
"Each test suite also contains a prediction of the expected difference between conditions.",
"Splitting the input into different regions makes it possible to measure the difference in model predictions at the token or phrase level.",
"(e.g. region 2 in condition mismatch should be more surprising than region 2 in condition match ).",
"Coherence.",
"While the notion of syntactic acceptability is well studied from a linguistic point of view and in terms of neural language model representations (Marvin and Linzen, 2018; Warstadt et al., 2019, 2020; Hu et al., 2020, inter alia ), it remains less clear what neural models are capable of capturing when modelling language across sentence boundaries.",
"There exists a large body of work in linguistics regarding different notions of coherence, such as the influence of coreference (Hobbs, 1979; Barzilay and Lapata, 2008, inter alia ), Centering theory (Grosz et al., 1995), discourse structure (Mann and Thompson, 1987; Webber et al., 2003), and phenomena that connect utterances in dialogue, such as conversational maxims (Grice, 1975) or speaker interaction (Lascarides and Asher, 2009).",
"Many of these are also mentioned by coherence evaluation studies, nonetheless they mostly revert to the use of some form of sentence-order variations (Chen et al., 2019; Moon et al., 2019; Xu et al., 2019; Mesgar et al., 2020).",
"While some progress has been made towards incorporating more linguistically motivated test sets (Chen et al., 2019; Mohammadi et al., 2020; Pishdad et al., 2020), most evaluation studies focus on models trained specifically on coherence classification and prediction tasks.",
"Language models.",
"The recently proposed transformer language model GPT-2 (Radford et al., 2019) has been shown to perform very well on many downstream language tasks.",
"See et al. (2019) quantitatively evaluate GPT-2 as a language generator and find that it generally performs on par with a state-of-the-art neural story generation model.",
"However, they also note that their automatic measures focus mostly on text diversity and stress the need for more qualitative evaluation methods for notions like text coherence.",
"GPT-2 is also the basis of the recently proposed dialogue model DIALOGPT (Zhang et al., 2020), which is fine-tuned on conversational data from Reddit.",
"Mehri and Eskenazi (2020) argue that DIALOGPT encodes several notions of dialogue quality, including coherence.",
"They manually create several positive and negative follow-up utterances for certain dialog qualities (e.g. Wow, that's in-teresting!\" or I'm confused.\").",
"The likelihood of DIALOGPT outputting either of them is then used to give an overall score per quality.",
"The notion of dialogue coherence, although shown to be among the most important for predicting overall dialogue quality, is found to be one of the hardest to predict using this method.",
"The authors attribute this to the fact that coherence (or the lack thereof) is seldom verbalised, so the model is not able to associate this notion with specific follow-up utterances.",
"We take this a step back and evaluate the evaluator in order to get a better understanding of which notions of coherence are actually implicitly encoded in DIALOGPT.",
"We test GPT-2 and DIALOGPT on different notions of discourse and dialogue coherence by evaluating them on specifically designed test suites building on the SyntaxGym methodology.",
"We show that the methods implemented in SyntaxGym can also be applied to evaluate phenomena that go beyond a single sentence.",
"SyntaxGym is based on the psycholinguistically motivated notion of surprisal, which they utilise to compare the scores assigned by a language model to specific regions in a minimal pair of sentences.",
"In our CoherenceGym setting, the regions of interest comprise larger chunks up to whole sentences.",
"We calculate the models' token level surprisals and aggregate them over all tokens t 1 . . . t n in the region r of interest.",
"As the continuations may differ in more than one token and can be of different lengths, we use the mean region surprisal.",
"2 s mean ( r ) = 1 n n (cid:88) i =1 s ( t i ) (2) To create incoherent versions, we utilise several existing datasets and devise different modifications that target a concrete phenomenon.",
"We also include some existing methods and resources in order to demonstrate that those can easily be integrated and to cover a wide range of phenomena, which are 2 This required a slight adaptation of syntaxgym , which is now part of the official implementation.",
"described in detail in Section 4.",
"We further add DIALOGPT (Zhang et al., 2020) to the lm-zoo to show that the coherence test suites can also be used to evaluate dialogue models.",
"3 The Coherence Detection (CD) scores reported in Section 4 measure the proportion of items for which each model met the prediction of each test suite, i.e., the prediction accuracy of whether the model found the incoherent version more surprising than the coherent counterpart.",
"SyntaxGym is built as a wrapper on top of lm-zoo , a repository of language model Docker containers specifying the functions tokenizer , unkify and get_surprisals .",
"GPT-2 (117M) (Radford et al., 2019) is already included by the developers, based on the huggingface transformers library.",
"4 We use this version and add DIALOGPT (Zhang et al., 2020), which is built upon GPT-2, but further fine-tuned on Reddit data, in the same manner.",
"As Reddit contains multi-person dialogues, the separator token is taken to denote speaker change.",
"Both models compute the next token probability based on the softmax output of the final linear layer of the decoder.",
"Following the get_surprisals function for GPT-2, we transform the token probabilities into surprisals as shown in Equation",
"1. Each of the two models exist in different versions, depending on the number of parameters (em-bedding size, number of layers).",
"For technical reasons, we used the small version of GPT-2 (117M) and the medium version of DIALO GPT(345M), so the two models are not directly comparable.",
"As the aim of this study is to show that the surprisal based targeted evaluation paradigm is useful for coherence evaluation in general, we leave a detailed comparison of the impact of different model sizes to future work.",
"In this section, we describe the different coherence phenomena assessed by our test suites.",
"For every test suite we first posit a hypothesis, which is coded into the suite's prediction section.",
"Next, we describe the dataset and the manipulation applied 3 This implies some restrictions on compatibility though: All models should be able to predict discourse coherence phenomena, but only dialogue models need to additionally encode dialogue coherence.",
"4 https://huggingface.co/transformers/ to create incoherent samples that exhibit a violation of coherence regarding the specific phenomenon.",
"Each subsection reports the results of the evaluated models on the respective test suite.",
"As we evaluate models pre-trained on English data, our test suites are devised only in English as well.",
"The first three test suites are based on existing methods or test sets that we integrate into the framework.",
"The following three test suites are newly created.",
"Hypothesis: A coherent text is composed of an ordered set of sentences in a logical sequence; shuffling the sentences breaks the logical order and hence coherence.",
"Since sequentiality is central to the language modelling task, models successfully distinguish between both versions.",
"This shuffling technique has been widely applied in the evaluation of coherence models (Barzilay and Lapata, 2008; Chen et al., 2019; Moon et al., 2019; Xu et al., 2019; Mesgar et al., 2020).",
"We include it as baseline for our method, in order to contrast how more fine-grained notions of coherence compare to this broad approach.",
"We use ROCStories (Mostafazadeh et al., 2016) and the PERSONA-CHAT corpus (Zhang et al., 2018) to evaluate sentence order for narration as well as dialogue data.",
"The ROCStories corpus consists of coherent five-sentence stories which were gathered by employing crowdworkers and contain several temporal and causal relations between the sentences.",
"To create the PERSONA-CHAT corpus (Zhang et al., 2018), crowd sourced dialogue participants were assigned a persona in the form of descriptive natural language sentences and were asked to talk to each other impersonating their assigned persona.",
"The dialogues contain at least 6 turns and we extract only the utterances and ignore the persona descriptions.",
"Two versions are created of both corpora:",
"1. We shuffle all utterances and compare the aggregated overall surprisal for all tokens over all regions.",
"2. We keep the last utterance fixed and shuffle only the context and compare the aggregated surprisal for the second region (cf.",
"(3)).",
"lot of fun and always invite.",
"I finally decided to tag along last Saturday.",
"I danced terribly and broke a friend's toe.",
"region 2: The next weekend, I was asked to please stay home.",
"b. condition name: shuffled region 1: I finally decided to tag along last Saturday.",
"I danced terribly and broke a friend's toe.",
"My friends all love to go to the club to dance.",
"They think it's a lot of fun and always invite.",
"region 2: The next weekend, I was asked to please stay home.",
"Results.",
"As Table 1 shows, shuffling is a good first indicator for detecting coherence on a global level, as the models perform quite well in the conditions where all sentences have been shuffled.",
"5 On a local level (i.e., the influence that shuffling the context has on the following sentence), however, the ability to detect the manipulated sequence drops largely, even to or below chance.",
"A manual inspection of the data in the context condition revealed that, in some cases, the final (non-moved) utterance (region 2) also can be judged as a coherent follow-up to the utterance shuffled into the final context position.",
"This also reveals that shuffling does not always break coherence in the expected way due to the nature of natural language, thus highlighting the importance of a more thoughtful design of coherence test suites.",
"Hypothesis: Combining commonsense and discourse relations enables a model to detect a co-5",
"co-5 It is worth noting that by fine-tuning on user generated content, this ability decreases, which probably says more about Reddit than aboutD IALOGPT, but as noted before, these results are not directly comparable as the models are of different sizes.",
"herent from an incoherent ending of a given story.",
"We use the same corpus as for the narration shuffling condition above, but keep the order intact.",
"The Story Cloze test set (Mostafazadeh et al., 2016) contains an additional implausible ending to each story.",
"We use the annotated test set of the spring 2016 version and create items with different endings as exemplified in (4).",
"a. condition name: original ending region 1: My friends all love to go to the club to dance.",
"They think it's a lot of fun and always invite.",
"I finally decided to tag along last Saturday.",
"I danced terribly and broke a friend's toe.",
"region 2: The next weekend, I was asked to please stay home.",
"b. condition name: distractor ending region 1: My friends all love to go to the club to dance.",
"They think it's a lot of fun and always invite.",
"I finally decided to tag along last Saturday.",
"I danced terribly and broke a friend's toe.",
"region 2: My friends decided to keep inviting me out as I am so much fun.",
"Calculating our CD score allows for a direct evaluation of language models without the need for training a classifier on top of the model representations.",
"Results.",
"The first column in Table 2 displays the results on the Story Cloze test suite.",
"While these results leave room for improvement, it is worth noting that they are on par or even outperform the models from the original paper, which mostly rely on semantic similarities between the context and the continuations.",
"However, we still do not learn which linguistic devices are responsible for the perception of coherence or incoherence of a given ending from this data.",
"The following test suites are designed to investigate specific phenomena of coherence and models abilities to encode them in more detail.",
"Hypothesis: Models are able to combine commonsense knowledge with pronoun resolution, thus they are able to distinguish the correct target from the distractor in Winograd Schema style sentences.",
"This dataset was proposed by Trinh and Le (2019) Story Cloze Winograd full partial GPT-2 0 .",
"as has also been applied by Radford et al. (2019) for evaluating GPT-2's commonsense knowledge.",
"We reproduce the test suite in the following way: (5)",
"a. condition name: target region 1: The city councilmen refused the demonstrators a permit because region 2: the city councilmen region 3: feared violence.",
"b. condition name: distractor region 1: The city councilmen refused the demonstrators a permit because region 2: the demonstrators region 3: feared violence.",
"Following Trinh and Le (2019) and Radford et al. (2019), we compare the full version (comparing the mean surprisal over all tokens) and a partial version (comparing the surprisal for region 3 ).",
"Results.",
"The last two columns in Table 2 report the CD scores for the Winograd test suite.",
"As noted by Trinh and Le (2019), the difference in language model scores is more obvious in the region following the inserted correct or distracting entity.",
"We are able to reproduce these results in our setting, which supports the applicability of the CoherenceGym approach.",
"Radford et al. (2019) demonstrate that the performance on this task can be increased by adding more parameters to the model.",
"We will inspect the impact of model sizes on the different test suites more closely in future work.",
"Hypothesis: Different referring expressions reflect both the accessibility and salience status of the entities being referred.",
"For keeping in topic however, entities need only to be re-mentioned, regardless of their form.",
"In this sense, language models are insensitive to the use of different referring expressions.",
"In line with theories proposing an accessibility hierarchy that position pronouns requiring the highest level of accessibility and lexical noun phrases (undefinites and definites) the lowest level (Givn, 1983; Ariel, 2004, cf.), we test whether language models capture a violation in the use of referring expressions according to their accessibility status.",
"For this test suite, we work with the ARRAU corpus (Uryupina et al., 2020).",
"In contrast to other coreference corpora, ARRAU is multi-genre including news, dialogue and fiction texts and provides annotations for non-nominal anaphora such as discourse deixis.",
"We extract coreferential chains whose mentions span consecutive sentences and with at least one pronominal mention.",
"The test suites examples consist of minimal pairs (6) where a same context sentence in region 1 containing the antecedent is followed by the sentence with the original pronoun re-mentioning the antecedent or by a manipulated sentence in which the pronoun is replaced by a repetition of the antecedent in region 2 .",
"a. condition name: pronoun region 1: And there's a ladder coming out of the tree and there's a man at the top of the ladder region 2: you can't see him yet",
"b. condition name: repetition region 1: And there's a ladder coming out of the tree and there's a man at the top of the ladder region 2: you can't see the man at the top of the ladder yet In keeping with the accessibility theory, we have replaced the indefinite marker a with a definite the in the repetition condition.",
"Results.",
"The results show that when presented with a new lexical entity, neither model has a clear preference for a pronominal re-mention of the entity (Table 3).",
"The very nature of the language model will drive it to topic continuity, as it is designed to generate tokens based on a previous history.",
"However, this does not automatically ensures cohesion.",
"Both pronominalisation and repetition represent cohesive ties to the previous context recoverable from surface cues.",
"The difference is that the first involves a stronger link with the context, licensing the use of the pronoun, which the models WSJ VPC Dialogue Fiction GPT-2 0 .",
"Hypothesis: Meaning is constructed by building a representation for each new sentence based on the content of the previous sentences, and a first level of the coherence between two segments is embodied by explicit connectives.",
"Hence, an inappropriate connective between two segments will yield a content gap.",
"Sensitivity to content-meaning implies then sensitivity to a change in explicit connectives.",
"For this exercise, we work with Disco-Annotation (Popescu-Belis et al., 2012), a corpus of segments from the Europarl corpus (Koehn, 2005) annotated with discourse connective senses.",
"6 Eight discourse connectives are annotated in the corpus ( as, although, though, while, since, yet, however, meanwhile ), with one of five possible senses ( contrast, concession, causal, temporal, comparison ).",
"We excluded all examples where the connective is in a segment initial position, since the previous segment is not provided, a setting incompatible with our constraints.",
"This removed all examples of meanwhile .",
"A minimal pair is created from each segment (7), where all the tokens up to the connective are used as context, followed by the original connective or another connective from the set, and the continuation of the segment.",
"(7)",
"a. condition name: original region 1: We share the widespread outrage at its attitude to history, in particular World War II, but also its policies on enlargement, on immigration, on race and its attitude to the European Union itself.",
"We were also outraged, region 2: however region 3: , at the tolerance of the left 6 Europarl segments are either very long sentences formed by several clauses or by 2-3 sentences clustered together, as a product of the sentence alignment process.",
"for the tyranny, the terror and the excesses of the former USSR.",
"b. condition name: manipulated region 1: We share the widespread outrage at its attitude to history, in particular World War II, but also its policies on enlargement, on immigration, on race and its attitude to the European Union itself.",
"We were also outraged, region 2: since region 3: , at the tolerance of the left for the tyranny, the terror and the excesses of the former USSR.",
"Some connectives may have the same sense depending on the specific context in which they appear (Stede, 2012; Webber et al., 2003), for instance both since and while may bear a temporal interpretation.",
"On that account, we expect that a replacement with a different connective bearing a different sense leads to region 3 being more surprising than a different connective able to have the same sense.",
"Results.",
"Not all relations captured by the connectives are equally difficult, producing high variability in the scores, as shown in Table 4.",
"While temporal senses seem to be relatively unproblematic (scores about 0.85 on average, GPT-2), con-trast', concession' and in particular causal' senses are more difficult to distinguish ( since_causal and as_causal have averages of 0.66 and 0.52 respec-tively).",
"The results for as present an interesting contrast.",
"This connective can also be used as a preposition.",
"When the connectives with this particular sense are replaced, the models do not have any trouble recognising the original from the manipulated sentence, as suggested by the systematic high scores obtained, between 0.96 and 0.99.",
"In most other senses, however, scores plummet as low as 0.28.",
"We observe a similar pattern for yet when used as an adverb in the DIALOGPT model.",
"Hypothesis: While it is possible for different speakers to have different opinions, speakers should not contradict themselves.",
"This test suite targets the notion of speaker commitment in dialogue models.",
"The test suite is created automatically based on the DialogueNLI corpus (Welleck et al., 2019), which contains pairs of utterances annotated as contradiction, entailment or neutral.",
"The sentence pairs are extracted from the PERSONA-CHAT corpus introduced in Section 4.1.",
"The sentences can either be part of the conversation or the persona descriptions.",
"We extract the contradicting sentence pairs from the human verified test set, and create two conditions for each utterance pair, as illustrated below: (8)",
"a. condition name: speaker change region 1: since the beginning of the year, i am a nurse.",
"[SEP] region 2: i am a kindergarten teacher.",
"b. condition name: same speaker region 1: since the beginning of the year, i am a nurse.",
"region 2: i am a kindergarten teacher.",
"In the first condition, we simulate a speaker change by introducing a [SEP] token (which is converted to the tokenizer's separator token internally) in the dialogue history, whereas in the second condition the continuation is uttered by the same speaker as the context.",
"A model that is encoding some notion of speaker commitment should find the second utterance more surprising if no speaker change occurred.",
"As non-dialogue language models do not encode the notion of speaker change, this test suite only yields relevant results for dialogue models.",
"Results.",
"DIALOGPT shows a tendency towards finding contradictions within the same speaker more surprising.",
"A manual inspection of the data revealed that even though we use the human verified test set, there are quite some instances where the implications are not as clear, for example in the following two sentence pairs: (9)",
"a. \"my nurse skills come in handy when i volunteer.\" \"i am a kindergarten teacher.\"",
"b. \"i love art and want to be a famous artist.\" \"i am a kindergarten teacher.\"",
"This highlights the importance of quality over quantity.",
"In future work, we will inspect this phenomenon more closely and combine the selection of items with human evaluation, to gain a better understanding of how the notion of speaker commitment is and can be encoded in neural dialogue models.",
"We revisit the targeted evaluation paradigm and create test suites focusing on specific coherence phenomena.",
"Each test suite contains minimal pairs of sequences that illustrate a specific component of coherence.",
"We evaluate two transformer models for language and dialogue modelling based on the token level surprisal scores they assign to the coherent and incoherent versions.",
"Extending the existing SyntaxGym toolkit, we evaluate GPT-2 and DIALOGPT on our newly designed test suites on entity re-mention, explicit discourse connectives and speaker commitment in dialogue.",
"Existing test sets are also integrated easily, which we demonstrate for sentence order detection, Story Cloze and Winograd Schema resolution tasks.",
"Our results support previous work suggesting that the notion of coherence encoded in neural language models is more nuanced than the sentence order discrimination task can reflect.",
"The mixed results we get, with some manipulations (e.g. the different sense connective substitutions) easily being spotted by the tested models and others (e.g. how to re-mention entities, or speaker contradictions) posing to be more difficult, point to the value of such targeted evaluation, which eventually might help in pointing towards where the introduction of different inductive biases could in-crease a model's performance.",
"In this study, we focus on the English language.",
"However, our approach is not inherently designed for English alone.",
"While lm-zoo only contains English language models at the moment, other language models can be added easily.",
"The shuffling perturbations can be applied to any corpus.",
"Our other test suites are based on available annotated corpora, which require some familiarity with the language, but can in principle be applied in a similar fashion to resources in other languages, such as the Potsdam Commentary Corpus (Bourgonje and Stede, 2020) for German connectives, for example.",
"We leave a multilingual extension of our framework for future work.",
"Our next efforts will focus on adding more language and dialogue models to determine the impact of different model architectures and sizes.",
"Building additional test suites in order to capture a more thor-ough notion of coherence is also among our priorities.",
"Last, we plan to collect human judgements to evaluate our coherence manipulations more closely and to create an upper bound for what we can expect from neural models.",
"We thank Johann Seltmann and Jon Gauthier for their help with augmenting lm-zoo and syntaxgym .",
"We also thank the anonymous reviewers for their valuable feedback.",
"This work was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) Project ID 317633480 SFB 1287."
] | [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"result",
"method",
"objective",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"objective",
"objective",
"result",
"result",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"other",
"other",
"other"
] |
[
"We investigate the less-explored task of generating open-ended questions that are typically answered by multiple sentences.",
"We first de-fine a new question type ontology which differentiates the nuanced nature of questions better than widely used question words.",
"A new dataset with 4 , 959 questions is labeled based on the new ontology.",
"We then propose a novel question type-aware question generation framework, augmented by a semantic graph representation, to jointly predict question focuses and produce the question.",
"Based on this framework, we further use both exemplars and automatically generated templates to improve controllability and diversity.",
"Experiments on two newly collected large-scale datasets show that our model improves question quality over competitive comparisons based on automatic metrics.",
"Human judges also rate our model outputs highly in answerability, coverage of scope, and overall quality.",
"Finally, our model variants with templates can produce questions with enhanced controllability and diversity.",
"Question-asking has long served as an effective instrument for knowledge learning (Andre, 1979; Tobin, 1990) and assessing learning progress (Holme, 2003; Downing and Yudkowsky, 2009; Livingston, 2009).",
"Compared to the widely studied task of generating factoid questions that inquire about one bit of information (Du et al., 2017; Duan et al., 2017; Li et al., 2019), this work is interested in generating open-ended questions that require deep comprehension and long-form answers (Labutov et al., 2015).",
"Such open-ended questions are valuable in education, e.g., to facilitate complex knowledge acquisition (Lai et al., 2017) and nurture reasoning skills (Shapley, 2000), as well as in other applications like improving search engines (Han Input : It's a difficult task to undertake.",
"Teenagers tend to identify gangs with fitting in.",
"Peer pressure plays a large part in it and sometimes teenagers have problems with their own identity being part of a gang deals with those issues.",
"It also provides a little bit of respect on the street ...",
"Significant progress has been made in generating factoid questions (Zhang and Bansal, 2019; Zhou et al., 2019b; Su et al., 2020), yet new challenges need to be addressed for open-ended questions.",
"First, specifying the question type is crucial for constructing meaningful questions (Graesser et al., 1992).",
"Question words such as why and when are generally seen as being indicative of types (Zhou et al., 2019b), but they underspecify the conceptual content of questions (Olney et al., 2012).",
"Using Figure 1 as an example, different question words, i.e., both how and what, can be used for inquiring about procedures.",
"It thus calls for a new question type ontology that can precisely capture the conceptual nature of questions .",
"Second, constructing questions from a text with multiple sentences needs to focus on its central concepts or phenomena that necessitate extensive descriptions .",
"New representations are needed to capture such content as question focus(es), to go beyond existing methods that rely on entities and their neighboring words (Du et al., 2017; Sun et al., 2018) even though they are effective for generating factoid questions.",
"Third, encouraging the diversity of generated questions (Sultan et al., 2020; Wang et al., 2020) is less explored but critical for real world applications, e.g., various questions should be proposed to gauge how well students grasp the knowledge of complex subjects.",
"In this work, we aim to address the challenges of generating open-ended questions from input consisting of multiple sentences.",
"We first introduce a new question type ontology , drawn upon researches in cognitive science and psychology (Graesser et al., 1992), to capture deeper levels of cognition, such as causal reasoning and judgments.",
"Based on the new ontology, we collect and annotate a dataset of 4 , 959 questions to benefit research in both question generation and answering.",
"1 We then design a type-aware framework to jointly predict question focuses (what to ask about) and generate questions (how to ask it).",
"Different from pipeline-based approaches (e.g., Sun et al. (2018)), our framework is built on large pre-trained BART (Lewis et al., 2020), and uses shared representations to jointly conduct question focus prediction and question generation while learning task-specific knowledge.",
"It is further augmented by a semantic graph that leverages both semantic roles and dependency relations, facilitating long text comprehension to pinpoint salient concepts.",
"Moreover, to achieve the goal of producing various types of questions from the same input, we investigate two model variants that use templates to improve controllability and generation diversity : one using pre-identified exemplars, the other employing generated templates to guide question writing, with sample outputs displayed in Figure 1. For experiments, we collect two new large-scale datasets consisting of open-ended questions with 1 Our data and code are available at: https:// shuyangcao.github.io/projects/ontology_open_ended_question .",
"answers from (1) Yahoo Answers 2 L6 dataset and (2) popular question-asking communities on Reddit 3 , consisting of 291 K and 720 K question-answer pairs, respectively.",
"Compared to existing popular QA datasets, such as SQuAD (Rajpurkar et al., 2016) and MS MARCO (Bajaj et al., 2016)), questions in our datasets ask about complex phenomena and perplexing social issues that seek solutions expressed in a long form.",
"Automatic metrics show that our type-aware question generation model outperforms competitive comparisons, highlighting the effectiveness of semantic graph-augmented representation and joint modeling of focus prediction and question generation.",
"Human judges also confirm that questions generated by our model have better overall quality.",
"Adding templates further promotes question diversity, as evaluated by both automatic evaluation and human assessment.",
"Question generation has long been studied to reduce human efforts in constructing questions for knowledge learning evaluation (Mitkov and Ha, 2003; Brown et al., 2005).",
"Early work relies on syntactic transformation to convert declarative sentences to questions (Heilman and Smith, 2010; Chali and Hasan, 2015).",
"Recent advancements rely on sequence-to-sequence models to generate a question from a given sentence or paragraph by considering the focus, type, and general-specific relations of questions (Sun et al., 2018; Zhou et al., 2019b; Krishna and Iyyer, 2019).",
"In particular, question likelihoods and rewards are designed to steer them toward being addressed by the given answers (Zhou et al., 2019a; Zhang and Bansal, 2019).",
"Attempts are also made toward creating complex questions that require multi-hop reasoning over the given text, and graph-based representations have been an enabling tool to facilitate the access to both entities and relations (Pan et al., 2020; Su et al., 2020).",
"While our model also enhances the input with a semantic graph, it boasts a richer representation by including both dependency and semantic relations, with predicted question focuses highlighted via extra node embeddings.",
"Moreover, we create a separate layer of cross attentions that is dedicated to the semantic graph, while prior work uses the same set of attentions to attend to the concatenated text and graph representations.",
"Given the data-driven nature of question generation and answering tasks, recent studies take advantage of the availability of large-scale QA datasets, such as SQuAD (Rajpurkar et al., 2016), MS MARCO (Bajaj et al., 2016), HotpotQA (Yang et al., 2018), DROP (Dua et al., 2019), inter alia.",
"These corpora mainly contain factoid questions, while our newly collected datasets are not only larger in size but also comprise significantly more open-ended questions for querying reasons and procedures.",
"A dataset closer to ours is ELI5 (Fan et al., 2019), which also obtains open-ended question-answer pairs from Reddit, while one of our datasets includes more Reddit communities and thus covers a wider range of topics.",
"Our work is more inline with generating deeper questions with responses that span over multiple sentences, where manually constructed templates are found effective (Olney et al., 2012).",
"For example, Labutov et al. (2015) use crowdsourcing to collect question templates based on an ontology derived from Wikipedia and Freebase topics.",
"Different from the topic-based ontology, our question types are more aligned with cognitive levels.",
"Moreover, our templates are automatically learned from training data.",
"Recent work (Rao and Daume III, 2018, 2019) focuses on asking clarification questions based on both retrieval and generation models.",
"As there has been no suitable framework for diverse types of questions, this work aims to fill the gap by introducing type-aware generation models which optionally leverage question templates for better controllability.",
"Generating diverse questions is much less studied, with existing approaches mainly focusing on entity replacement (Cho et al., 2019), sampling decoding (Sultan et al., 2020; Wang et al., 2020), and post-filtering (Liu et al., 2020).",
"However, the produced diversity is driven by word choice and syntax variation, with little ability to control on question types, which is the focus of this work.",
"To collect open-ended questions, we resort to online forums with active question-asking discussions.",
"Concretely, we gather and clean question-answer pairs from Reddit and Yahoo Answers, to train generators that construct questions by taking the corresponding answer as input.",
"We choose five popular Reddit communities: r/AskHistorians , r/Ask Politics , r/askscience , r/explainlikeimfive , and r/AskReddit , where open-ended questions are actively asked.",
"The original posts (OPs) are extracted, with their titles becoming questions.",
"We also keep the best answer with the highest karma (i.e., upvotes minus downvotes) if it is greater than 1. A second dataset with question-answer pairs is collected from the Yahoo Answers L6 corpus 4 , which covers a broader range of topics than the Reddit data.",
"For each question, the best answer is rated by the user who raises the question.",
"Preprocessing.",
"To ensure both questions and answers are well-formed, human inspection is conducted in multiple iterations to design rules to filter out improper samples.",
"For instance, we discard samples whose answers have less than 15 content words to avoid the inclusion of factoid question.",
"More details are provided in Table 6 in Appendix A. Ultimately, 719 , 988 question-answer pairs are kept for Reddit, and 290 , 611 for Yahoo.",
"Each dataset is then divided into train, validation and test sets with a 90% / 5% / 5% split.",
"The average lengths of questions and answers are 14 .",
"5 and 117 .",
"8 for Reddit, and 12 .",
"2 and 123 .",
"6 for Yahoo.",
"Our question type ontology is adopted and modified from Olney et al. (2012), where 18 categories are originally proposed for knowledge learning as-4",
"as-4 https://webscope.sandbox.yahoo.com/",
"sessment.",
"We recruited 6 native English speakers for three rounds of question type annotation.",
"Based on the annotators' feedback after each round, we refine the definitions, merge ambiguous types, and delete inapplicable categories.",
"For example, an initial EXPECTATION type is merged into CAUSE due to their similarities in seeking causality.",
"Finally, 10 types are preserved (Table 1).",
"As can be seen, our ontology is designed to better capture the nature of questions than question words.",
"Annotating Questions with Types.",
"After the annotation guideline is finalized, we ask the same set of annotators to label 5 , 000 ( 2 2 , 500 ) randomly sampled questions from both Reddit and Yahoo's training sets.",
"Each question is labeled by two annotators, with disagreements resolved through discussions.",
"After removing samples without consensus, the final dataset consists of 4 , 959 questions.",
"EXAMPLE questions are most prevalent, comprising 23 .",
"4% of samples, while only 2 .",
"6% are CONSEQUENCE questions.",
"A Krippendorff's of 0 .",
"67 is obtained for all samples, indicating a reasonable agreement level.",
"The annotation guideline and examples for each question type are shown in Table 12 in Appendix A. Training Question Type Classifiers.",
"Since our type-aware question generation model requires a specified type as input, here we describe how to build two question type classifiers: (1) q , that labels a type by reading the question and is used to provide question type labels during training ; (2) a , that predicts a type for use by taking the answer as input and is used during test .",
"Both classifiers are based on RoBERTa (Liu et al., 2019), where a prediction layer is built on top of the contextual representation of the [BOS] token to output question type probabilities.",
"q achieves a macro F1 score of 0 .",
"80 on a reserved test set, with data splits detailed in Appendix B. To train a , in addition to the annotated questions, we run q on unlabeled questions in Reddit and Yahoo and include samples whose type prediction confidence score is above 0 .",
"9 .",
"We train one a for each dataset.",
"a obtains macro F1 scores of 0 .",
"48 and 0 .",
"46 on the same reserved test set over all types after training on Yahoo and Reddit, respectively.",
"After running q on both datasets, we find that Reddit has significantly more EXAMPLE questions ( 43 . 8% of all samples).",
"Yahoo dataset is more balanced, with PROCEDURAL questions being the most frequent type ( 19 . 9% of all samples).",
"Distri-Encoder SelfAttn CrossAttn CrossAttn DecoderLayer Question [EXPLGEN ]: Why does music sound louder sometimes?",
"In this section, we present our type-aware question generation framework.",
"As shown in Figure 2, our model takes in a multi-sentence text and a predicted question type.",
"Built on shared input representations, it first detects question focuses from a semantic graph, and then generates the question ( 4.1).",
"We also propose two model variants that consider automatically extracted template exemplars or generated templates to achieve controllability ( 4.2), enabling the generation of diverse questions.",
"Our generator is built on top of BART (Lewis et al., 2020).",
"To facilitate the detection of salient content (i.e., focuses) to raise questions, we first augment the encoder with a semantic graph that consists of both dependency relations and semantic roles, capturing semantic relations over different scopes with varying granularities.",
"Question focuses are first detected based on the semantic graph, which then guide question generation via cross-attentions, as shown in Figure 2. Although the joint modeling of focus prediction and question generation has been studied before, our design differs by using shared representations consisting of the input text and semantic graph, and the prediction of focuses are included through gating mechanisms, whereas previous work, e.g. Pan et al. (2020), simply employs multi-task learning.",
"Below, we first describe constructing the semantic graph-augmented encoder, followed by the joint modeling of two tasks.",
"Improving Long Text Comprehension with Semantic Graph.",
"To construct the semantic graph, for each sentence, we start with obtaining its dependency tree using Stanford CoreNLP (Manning et al., 2014).",
"To better highlight core concepts, we discard less important relations, e.g., auxiliaries.",
"The full list is included in Appendix C. Since our goal is to detect central concepts that are well connected with many other words, we can remove relations on the edges to minimize the number of parameters to learn.",
"Moreover, as semantic roles can indicate main entities (Mannem et al., 2010), we extract semantic roles and their relations with AllenNLP (Shi and Lin, 2019).",
"To merge the two sources of information, we add an edge in the dependency tree to connect the head word of the predicate and the head word of each semantic role.",
"To build a connected graph from the multi-sentence input, we add an edge between each sentence's last token and the next sentence's first token.",
"Finally, we merge nodes with the same surface forms or with corefered mentions.",
"To the best of our knowledge, this is the first time that both dependency and semantic relations are encoded in the same graph for question generation, and with enhanced connectivity of the constructed graph, our design can better signal content salience.",
"Joint Modeling with Cross-attentions.",
"Given a predicted question type t and a multi-sentence text x = { x 1 , , x n } , the BART encoder builds the contextual representation H = { h 0 , h 1 , , h n } at the last layer, where h 0 is for t .",
"To encode the semantic graph, we initialize the node representation for node v i by taking the average contextual representations of its tokens and appending four bits encoding the number of nodes (capped at 10 ) that are merged into v i , to add frequency information.",
"This step yields new node representations v (0) i .",
"We then apply graph attention networks (GATs) (Velickovic et al., 2018) of L layers to update the representations as follows: v ( l ) i = (cid:88) j N i a i,j W ( l ) v ( l 1) j (1) where W ( l ) is a learnable parameter for the l -th layer, and N i denotes the neighbors of v i .",
"The attention score a i,j is calculated as in GATs.",
"We use L = 2 for experiments.",
"To predict focuses , the final node representation v ( L ) i is fed into the following feedforward network, yielding the probability of v i being a focus as: p focus ( v i = 1) = ( W 1 tanh( W 2 v ( L ) i )) (2) where W 1 and W 2 are learnable parameters.",
"Bias terms are omitted for simplicity.",
"We construct ground-truth labels by treating a node as a focus if it contains words used in the question.",
"To generate the question , we use the gating mechanism to inform the focus prediction results, where new node representations after being weighted by the focus probability are: v ( L ) (cid:48) i = g i (cid:12) v ( L ) i g i = p focus ( v i = 1) (3) Our model benefits from both large pre-training and hybrid semantic graphs by adding a separate cross attention for node presentations in each BART decoder layer.",
"We then design separate cross attentions to attend (1) the output of the BART encoder, yielding z e , and (2) the node representations V ( L ) (cid:48) , producing z v , which are formulated as: z e = LN ( z s + Attn ( z s , H )) (4) z v = LN ( z e + Attn ( z e , V ( L ) (cid:48) )) (5) z (cid:48) = LN ( z v + FFN ( z v )) (6) where z s denotes the output of self attentions for the current layer, and z (cid:48) is the output for the layer.",
"Attn ( , ) denotes the multi-head attention operation as in Vaswani et al. (2017), FFN ( ) a feedforward layer, and LN ( ) is layer normalization.",
"Our final training objective accounts for both focus prediction and question generation objectives with equal weights.",
"An important goal of this work is to enable the generation of questions of diverse types.",
"However, simply adding question type as input is insufficient (discussed in 5).",
"We thus propose to leverage question templates to gain stronger controllability.",
"Below we first present how to automatically extract templates from the training set, and then introduce two model variants that are built on the JOINTGEN framework: EXPLGEN uses exemplar templates to guide the model to generate questions of selected types, and TPLGEN adds an extra step to first generate type-specific templates.",
"Template Extraction.",
"While collecting templates specific to a given type, we need to ensure they remain topic-independent to be generalizable to different domains.",
"To this end, we replace a word in the question with a template token that indicates its syntax function, e.g., [V] for a verb, if it appears in the answer after lemmatization.",
"We further consider topically related words in the questions, by calculating word-level semantic similarities based on Numberbatch word embeddings (Speer et al., 2017), which are found to perform better on our datasets than other embeddings.",
"Concretely, for each word in the answer, we replace the most similar word in the question with the template token.",
"This process is repeated until 80% of content words in questions are replaced.",
"Finally, for each noun phrase, adjective phrase, and adverb phrase, if its head word has been replaced, the whole phrase is transformed into a phrase type token.",
"For instance, a question What are the differences between global warming and climate change? becomes What are the differences between [NP] and [NP] ?",
"Exemplars for Guidance (EXPLGEN ).",
"Our first model variant considers adding a template exemplar for the given type as additional input, which provide more specific information to control the type of generated questions.",
"Figure 2 shows one such example.",
"To identify exemplars, we use templates with frequencies above 20 on Yahoo and 50 on Reddit.",
"We then manually inspect these templates and remove the ones with topic-specific words, resulting in 66 exemplars for all types.",
"They are listed in Table 10 in Appendix D. During training, we choose the exemplar that has the lowest edit distance with the question, which is also used for training an exemplar selector based on RoBERTa.",
"During testing, the exemplar with the highest selector score is used.",
"The accuracy of the exemplar selector for each question type on the test set is reported in Table 11 in Appendix D. Generated Templates for Guidance (TPLGEN ).",
"We further propose another model variant where we generate a new template and feed it (instead of an exemplar template as in EXPLGEN ) as part of the question generation input.",
"Specifically, we reuse EXPLGEN to learn to generate a target template, as derived from the template extraction procedure.",
"During question realization, TPLGEN uses a BART-based generator that takes as input the question type, the input text, the generated template, and the words that are predicted as focuses.",
"We use separate cross attentions to attend the representations of the focused words, similar to how node representations are attended in JOINTGEN .",
"We recognize that having separate stages of exemplar selection and template generation introduces extra model training cost and potential errors in the pipeline.",
"This work, however, focuses on improving the controllability as well as diversity of question generation, and we will leave the building of more efficient models in the future work.",
"Comparisons and Metrics.",
"We compare with DEEPQG (Pan et al., 2020), a model that uses dependency graphs for multi-hop question generation.",
"We also compare with BART models that are fine-tuned on the same datasets as in our models, by using inputs of (1) the answer (BART), (2) the answer and a predicted question word (BART+QW ORD ), and (3) the answer and a predicted question type (BART+QT YPE ).",
"For BART+QW ORD , the question word is predicted by a RoBERTa classifier that considers the answer and is trained on our training sets.",
"We follow Liu et al. (2020) and use 9 categories of question words.",
"For both our models and BART+QT YPE , the most confident type predicted by the classifier a (described in 3.2), which reads in the answer, is used as input.",
"To test the efficacy of semantic graphs, we further compare with a variant of JOINTGEN that only uses the flat Transformer for focus prediction and question generation, denoted as JOINTGEN w/o graph.",
"and Agarwal, 2007), and ROUGE-L (Lin, 2004).",
"5 Results on both Yahoo and Reddit datasets are reported in Table 2. Our JOINTGEN outperforms all comparisons on both datasets over all automatic evaluation metrics except for METEOR on Reddit.",
"When taking out the semantic graphs, model performance degrades substantially, which suggests that 5 We do not consider using Q-BLEU (Nema and Khapra, 2018) since it weighs question words highly.",
"having structured representation is useful for focus detection and the final question generation task.",
"We also observe a huge performance gap between DEEPQG and systems based on BART, signifying the importance of leveraging pre-trained models for open-ended question generation.",
"Meanwhile, adding question types helps BART generate more relevant questions than using question words, indicating the value of our new question type ontology.",
"Notably, our template-based generators, EXPLGEN and TPLGEN , which are trained to comply with the given templates, still produce comparable scores.",
"This highlights the possibility to control the generated questions' types and syntax as demonstrated by the templates, without performance loss .",
"Question Diversity Evaluation.",
"Next, we exam-2 3 4 5 6 7 8 9 # of Given Types 2 3 4 5 6 7 # o f U n i q u e T y p e s Yahoo 2 3 4 5 6 7 8 9 # of Given Types 2 3 4 5 6 7 Reddit BART + QWord JointGen EXPLGen TPLGen Figure 3: Number of unique types of the generated questions (Y-axis), when different numbers of question types are specified (X-axis).",
"ine the controllability of models by specifying different question types as input.",
"The top 9 confident types 6 predicted by our type predictor a are used as input to our models, producing 9 questions for evaluation.",
"For BART, we use nucleus sampling (Holtzman et al., 2020) with k = 10 and p = 0 .",
"7 to sample diverse questions.",
"To evaluate, we first calculate the question type accuracy by comparing whether the types of the generated questions match the specified ones, with types labeled by our classifier q ( 3.2).",
"We then report the average numbers of unique question types in the 9 generated questions per sample, with higher number indicating better controllability.",
"Finally, we consider pairwise BLEU-4 (Cho et al., 2019) by computing the BLEU-4 between pairwise generated questions per sample, where lower values suggest higher content diversity.",
"First, our EXPLGEN and TPLGEN can generate questions with diverse types and content , as shown by the significantly higher numbers of unique types than all comparisons and lower pairwise BLEU scores than comparisons except for BART with nucleus sampling in Table 3. This implies stronger type control by template-based generators, compared to BART+QT YPE and JOINTGEN which only use the question type token as input.",
"Results on numbers of unique types by varying numbers of question types specified in the input are displayed in Figure 3, where EXPLGEN and TPLGEN maintain steady controllability.",
"Second, our question type ontology provides a new perspective for question diversity evaluation.",
"Among the comparisons, although BART with nucleus sampling and BART+QW ORD both have low pairwise BLEU, the types of questions they can generate are limited.",
"6 9 types are chosen because we only have 9 categories of question words for BART+QW ORD .",
"Question Diversity.",
"We hire three annotators who have participated in our question type annotation study to evaluate 80 groups of questions generated by four selected models on each dataset.",
"For each group, we randomly sample an answer and indicate three most probably question types to each model, to generate three corresponding questions.",
"For each sample, the annotators are asked to rank the four models from 1 (highest) to 4 (lowest) on three aspects of diversities: type whether the three generated questions have different types, syntax whether they use different syntax, and answer content whether the three questions need to be addressed with different answers.",
"Ties are allowed.",
"We find that human judges rate questions generated by our EXPLGEN and TPLGEN as having greater diversities over all aspects , except for syntax diversity on Reddit, as shown in Table 4. Among the two model variants, questions by TPLGEN yield more diverse answers.",
"Based on our observation, TPLGEN uses automatically generated templates to produce more focused questions with different answers, compared to EXPLGEN which employs exemplars.",
"This shows the promise of using automatically generated templates to create questions that need to be addressed with different answers.",
"Besides Figure 1, we show more sample outputs in Figure 4, where EXPLGEN and TPLGEN exhibit stronger controllability than JOINTGEN .",
"Question Content Quality.",
"We use the same set of human judges to evaluate another 80 groups of questions output by five selected models and the reference.",
"Three aspects are rated from 1 (worst) Answer: My sister in law and her husband genetically modified their second child because the first has EB.",
"They eliminated that and had a baby that gets to live pain free.",
"Under the right circumstances, I'm all for it ...",
"JOINTGEN [PROCEDURAL ] How would you feel about genetically modified babies?",
"to 5 (best): appropriateness whether the question is semantically correct, without considering the answer; answerability whether the question can be addressed by the given answer; and scope whether the question is related to a longer span of the answer (global scope) or focuses on local content (e.g., one phrase or one sentence).",
"We further ask the annotators to rank questions based on their overall quality and preferences, with ties allowed.",
"As shown in Table 5, our JOINTGEN model produces questions with better answerability and that cover broader content in the answers.",
"It is also rated as the best in more than half of the evaluation instances on both datasets.",
"Between BART+QW ORD and BART+QT YPE , human judges rate the system outputs that conditioned on our question types to have better overall quality.",
"Does focus prediction correlate with question quality?",
"We first investigate the relationship between focus prediction and question generation by using our joint model JOINTGEN .",
"As can be seen from Figure 5, there is a strong correlation between F1 scores of focus prediction and BLEU-4 as well Model Appro.",
"as ROUGE-L, where samples in the Yahoo and Reddit test sets are grouped into 8 bins based on the F1 scores.",
"The Pearson correlation coefficients between BLEU-4 and focus F1 are 0 .",
"29 on Yahoo and 0 .",
"26 on Reddit.",
"For ROUGE-L, the correlation coefficients are 0 .",
"35 on Yahoo and 0 .",
"34 on Reddit.",
"All the correlations have p < 10 5 .",
"The strong positive correlations imply the importance of accurate focus prediction for open-ended question generation.",
"We also show the F1 scores and BLEU-4 for selected question types on the right of Figure 5, again demonstrating the effect of focus detection on question quality.",
"When do our models fail to respect the given types?",
"Next, we provide insights into which types of questions are challenging to generate by using our template-based models EXPLGEN and TPLGEN .",
"Both variants frequently fail to respect the given question type of VERIFICATION , in which cases they often produce JUDGEMENTAL questions.",
"They also tend to confuse EXAMPLE and EXTENT with CONCEPT questions.",
"After manually inspecting 50 generated questions for the aforementioned three types, we find that many of them can be labeled with both types, thus creating confusion for our classifier.",
"For instance, What are the import restrictions in the US? can be considered as either 0.4 0.6 0.8 1.0 Focus F1 20 30 40 50 BLEU-4 ROUGE-L Disj.",
"We present a new question type ontology which better captures the nuances of questions to support the study of open-ended question generation.",
"We further annotate a new dataset with 4 , 959 questions based on the proposed ontology.",
"We describe a joint question focus detection and question generation framework with a novel semantic graph-augmented representation, which is directly built on large pre-trained models.",
"Based on this framework, we also enhance the controllability and diversity of generated questions by employing template exemplars or automatically generated templates.",
"Experiments on two large datasets show that questions generated by our models have better quality and higher diversity than non-trivial comparisons, with similar results rated by human judges.",
"This research is supported in part by National Science Foundation through Grants IIS-1813341 and a CAREER award IIS-2046016.",
"We thank three anonymous reviewers, area chair, and senior area chairs for their valuable suggestions for improving various aspects of this work.",
"Large models that are pre-trained on heterogeneous web data are shown to encode biases and can be potentially harmful for marginalized populations.",
"While the automatically learned templates improve controllability in question generation, we also recognize that our system might be misused to create questions that contain objectionable content.",
"We therefore advocate cautious and responsible practices in real-world deployment.",
"Our data collection process for the two new datasets involves removing samples with abusive languages and human inspection on random samples.",
"Given the data volume, however, we cannot exhaustively verify that all records are free of potentially offensive content."
] | [
"objective",
"objective",
"abstain",
"objective",
"result",
"result",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"other",
"objective",
"other",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"objective",
"method",
"result",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Style transfer is the task of automatically transforming a piece of text in one particular style into another.",
"A major barrier to progress in this field has been a lack of training and evaluation datasets, as well as benchmarks and automatic metrics.",
"In this work, we create the largest corpus for a particular stylistic transfer (formality) and show that techniques from the machine translation community can serve as strong baselines for future work.",
"We also discuss challenges of using automatic metrics.",
"One key aspect of effective communication is the accurate expression of the style or tone of some content.",
"For example, writing a more persuasive email in a marketing position could lead to increased sales; writing a more formal email when applying for a job could lead to an offer; and writing a more polite note to your future spouse's parents, may put you in a good light.",
"Hovy (1987) argues that by varying the style of a text, people convey more information than is present in the literal meaning of the words.",
"One particularly important dimension of style is formality (Heylighen and Dewaele, 1999).",
"Automatically changing the style of a given content to make it more formal can be a useful addition to any writing assistance tool.",
"In the field of style transfer, to date, the only available dataset has been for the transformation of modern English to Shakespeare, and it led to the application of phrase-based machine translation (PBMT) (Xu et al., 2012) and neural machine translation (NMT) (Jhamtani et al., 2017) models to the task.",
"The lack of an equivalent or larger dataset for any other form of style transfer has blocked progress in this field.",
"Moreover, prior This research was performed when the first author was at Grammarly.",
"work has mainly borrowed metrics from machine translation (MT) and paraphrase communities for evaluating style transfer.",
"However, it is not clear if those metrics are the best ones to use for this task.",
"In this work, we address these issues through the following three contributions: Corpus: We present Grammarly's Yahoo Answers Formality Corpus (GYAFC), the largest dataset for any style containing a total of 110K informal / formal sentence pairs.",
"Table 1 shows sample sentence pairs.",
"Benchmarks: We introduce a set of learning models for the task of formality style transfer.",
"Inspired by work in low resource MT, we adapt existing PBMT and NMT approaches for our task and show that they can serve as strong benchmarks for future work.",
"Metrics: In addition to MT and paraphrase metrics, we evaluate our models along three axes: formality , fluency and meaning preservation using existing automatic metrics.",
"We compare these metrics with their human judgments and show there is much room for further improvement.",
"Informal: I'd say it is punk though.",
"Formal: However, I do believe it to be punk.",
"Informal: Gotta see both sides of the story.",
"Formal: You have to consider both sides of the story.",
"In this paper, we primarily focus on the informal to formal direction since we collect our dataset for this direction.",
"However, we evaluate our models on the formal to informal direction as well.",
"1 All data, model outputs, and evaluation results have been made public 2 in the hope that they will encourage more research into style transfer.",
"In the following two sections we discuss related work and the GYAFC dataset.",
"In 4, we detail our rule-based and MT-based approaches.",
"In 5, we describe our human and automatic metric based evaluation.",
"In 6, we describe the results of our models using both human and automatic evaluation and discuss how well the automatic metrics correlate with human judgments.",
"Style Transfer with Parallel Data: Sheikha and Inkpen (2011) collect pairs of formal and informal words and phrases from different sources and use a natural language generation system to generate informal and formal texts by replacing lexical items based on user preferences.",
"Xu et al. (2012) (henceforth XU",
"12) was one of the first works to treat style transfer as a sequence to sequence task.",
"They generate a parallel corpus of 30K sentence pairs by scraping the modern translations of Shakespeare plays and train a PBMT system to translate from modern English to Shakespearean English.",
"3 More recently, Jhamtani et al. (2017) show that a copy-mechanism enriched sequence-to-sequence neural model outperforms XU 12 on the same set.",
"In text simplification, the availability of parallel data extracted from English Wikipedia and Simple Wikipedia (Zhu et al., 2010) led to the application of PBMT (Wubben et al., 2012a) and more recently NMT (Wang et al., 2016) models.",
"We take inspiration from both the PBMT and NMT models and apply several modifications to these approaches for our task of transforming the formality style of the text.",
"Style Transfer without Parallel Data: Another direction of research directly controls certain attributes of the generated text without using parallel data.",
"Hu et al. (2017) control the sentiment and the tense of the generated text by learning a disentangled latent representation in a neural generative model.",
"Ficler and Goldberg (2017) control several linguistic style aspects simultaneously by conditioning a recurrent neural network language model on specific style (pro-fessional, personal, length) and content (theme, sentiment) parameters.",
"Under NMT models, Sennrich et al. (2016a) control the politeness of the translated text via side constraints, Niu et al. (2017) control the level of formality of MT output 3 https://github.com/cocoxu/Shakespeare by selecting phrases of a requisite formality level from the k-best list during decoding.",
"In the field of text simplification, more recently, Xu et al. (2016) learn large-scale paraphrase rules using bilingual texts whereas Kajiwara and Komachi (2016) build a monolingual parallel corpus using sentence similarity based on alignment between word embeddings.",
"Our work differs from these methods in that we mainly address the question of how much leverage we can derive by collecting a large amount of informal-formal sentence pairs and build models that learn to transfer style directly using this parallel corpus.",
"Identifying Formality: There has been previous work on detecting formality of a given text at the lexical level (Brooke et al., 2010; Lahiri et al., 2011; Brooke and Hirst, 2014; Pavlick and Nenkova, 2015), at the sentence level (Pavlick and Tetreault, 2016) and at the document level (Sheikha and Inkpen, 2010; Peterson et al., 2011; Mosquera and Moreda, 2012).",
"In our work, we reproduce the sentence-level formality classifier introduced in Pavlick and Tetreault (2016) (PT16) to extract informal sentences for GYAFC creation and to automatically evaluate system outputs.",
"Evaluating Style Transfer: The problem of style transfer falls under the category of natural language generation tasks such as machine translation, paraphrasing, etc.",
"Previous work on style transfer (Xu et al., 2012; Jhamtani et al., 2017; Niu et al., 2017; Sennrich et al., 2016a) has re-purposed the MT metric BLEU (Papineni et al., 2002) and the paraphrase metric PINC (Chen and Dolan, 2011) for evaluation.",
"Additionally, XU 12 introduce three new automatic style metrics based on cosine similarity, language model and logistic regression that measure the degree to which the output matches the target style.",
"Under human based evaluation, on the other hand, there has been work on a more fine grained evaluation where human judgments were separately collected for adequacy, fluency and style (Xu et al., 2012; Niu et al., 2017).",
"In our work, we conduct a more thorough evaluation where we evaluate model outputs on the three criteria of formality , fluency and meaning using both automatic metrics and human judgments.",
"3.1 Creation Process Yahoo Answers, 4 a question answering forum, contains a large number of informal sentences and allows redistribution of data.",
"Hence, we use the Yahoo Answers L6 corpus 5 to create our GYAFC dataset of informal and formal sentence pairs.",
"In order to ensure a uniform distribution of data, we remove sentences that are questions, contain URLs, and are shorter than 5 words or longer than 25.",
"After these preprocessing steps, 40 million sentences remain.",
"The Yahoo Answers corpus consists of several different domains like Business, Entertainment & Music, Travel, Food, etc.",
"PT16 show that the formality level varies significantly across different genres.",
"In order to control for this variation, we work with two specific domains that contain the most informal sentences and show results on training and testing within those categories.",
"We use the formality classifier from PT16 to identify informal sentences.",
"We train this classifier on the Answers genre of the PT16 corpus which consists of nearly 5,000 randomly selected sentences from Yahoo Answers manually annotated on a scale of -3 (very informal) to 3 (very for-mal).",
"6 We find that the domains of Entertainment & Music and Family & Relationships contain the most informal sentences and create our GYAFC dataset using these domains.",
"Table 2 shows the number of formal and informal sentences in all of Yahoo Answers corpus and within the two selected domains.",
"Sentences with a score less than 0 are considered as informal and sentences with a score greater than 0 are considered as formal.",
"Next, we randomly sample a subset of 53,000 informal sentences each from the Entertainment & Music (E&M) and Family & Relationships (F&R) categories and collect one formal rewrite per sentence using Amazon Mechanical Turk.",
"The workers are presented with detailed instructions, as well 4 https://answers.yahoo.com/answer 5 https://webscope.sandbox.yahoo.com/",
"Informal to Formal Formal to Informal",
"as examples.",
"To ensure quality control, four experts, two of which are the authors of this paper, reviewed the rewrites of the workers and rejected those that they felt did not meet the required standards.",
"They also provided the workers with reasons for rejection so that they would not repeat the same mistakes.",
"Any worker who repeatedly performed poorly was eventually blocked from doing the task.",
"We use this train set to train our models for the style transfer tasks in both directions.",
"Since we want our tune and test sets to be of higher quality compared to the train set, we recruit a set of 85 expert workers for this annotation who had a 100% acceptance rate for our task and who had previously done more than 100 rewrites.",
"Further, we collect multiple references for the tune/test set to adapt PBMT tuning and evaluation techniques to our task.",
"We collect four different rewrites per sentence using our expert workers by randomly assigning sentences to the experts until four rewrites for each sentence are obtained.",
"7 To create our tune and test sets for the informal to formal direction, we sample an additional 3,000 informal sentences for our tune set and 1,500 sentences for our test set from each of the two domains.",
"To create our tune and test sets for the formal to informal direction, we start with the same tune and test split as the first direction.",
"For each formal rewrite 8 from the first direction, we collect three different informal rewrites using our expert workers as before.",
"These three informal rewrites along with the original informal sentence become our set of four references for this direction of the task.",
"Table 3 shows the exact number of sentences in our train, tune and test sets.",
"The following quantitative and qualitative analyses are aimed at characterizing the changes between the original informal sentence and its formal",
"7 Thus, note that the four rewrites are not from the same four workers for each sentence 8 Out of four, we pick the one with the most edit distance with the original informal.",
"Rationale explained in Section 3.2 131 rewrite in the GYAFC train split.",
"Quantitative Analysis: While rewriting sentences more formally, humans tend to make a wide range of lexical/character-level edits.",
"In Figure 1, we plot the distribution of the character-level Lev-enshtein edit distance between the original informal and the formal rewrites in the train set and observe a standard deviation of = 19 .",
"39 with a mean = 28 .",
"85 .",
"Next, we look at the difference in the formality level of the original informal and the formal rewrites in GYAFC.",
"We find that the classifier trained on the Answers genre of PT16 dataset correlates poorly (Spearman = 0.38) with human judgments when tested on our domain specific datasets.",
"Hence, we collect formality judgments on a scale of -3 to +1, similar to PT16, for an additional 5000 sentences each from both domains and obtain a formality classifier with higher correlation (Spearman = 0.56).",
"We use this retrained classifier for our evaluation in 5 as well.",
"formality scores on the original informal sentence and their formal rewrites in the train set and observe an increase in the mean formality score as we go from informal ( 1 . 06 ) to formal rewrites ( 0 . 12 ).",
"As compared to edit distance and formality, we observe a much lower variation in sentence lengths with the mean slightly increasing from informal ( 11 . 93 ) to their formal rewrites ( 12 . 56 ) in the train set.",
"Qualitative Analysis: To understand what stylistic choices differentiate formal from informal text, we perform an analysis similar to PT16 and look at 50 rewrites from both domains and record the frequency of the types of edits that workers made when creating a more formal sentence.",
"10 In contrast to PT16, we observe a higher percentage of phrasal paraphrases (47%), edits to punctuations (40%) and expansion of contractions (12%).",
"This is reflective of our sentences coming from very informal domains of Yahoo Answers.",
"Similar to PT16, we also observe capitalization (46%) and normalization (10%).",
"We experiment with three main classes of approaches: a rule-based approach, PBMT and NMT.",
"Inspired by work in low resource machine translation, we apply several modifications to the standard PBMT and NMT models and create a set of strong benchmarks for the style transfer community.",
"We apply these models to both directions of style transfer: informal to formal and formal to informal .",
"In our description, we refer to the two styles as source and target .",
"We summarize the models below and direct the reader to supplementary material for further detail.",
"Corresponding to the category of edits described in 3.2, we develop a set of rules to automatically make an informal sentence more formal where we capitalize first word and proper nouns, remove repeated punctuations, handcraft a list of expansion for contractions etc.",
"For the formal to informal direction, we design a similar set of rules in the opposite direction.",
"Phrased-based machine translation models have had success in the fields of machine translation, style transfer (XU",
"12) and text simplification (Wubben et al., 2012b; Xu et al., 2016).",
"Inspired by work in low resource machine translation, we use a combination of training regimes to develop our model.",
"We train on the output of the rule-based approach when applied to GYAFC.",
"This is meant to force the PBMT model to learn generalizations outside the rules.",
"To increase the data size, we use self-training (Ueffing, 2006), where we use the PBMT model to translate the large number of in-domain sentences from GYAFC belonging to the the source style and use the resultant output to retrain the PBMT model.",
"Using sub-selection, we only select rewrites that have an Lev-enshtein edit distance of over 10 characters when compared to the source to encourage the model to be less conservative.",
"Finally, we upweight the rule-based GYAFC data via duplication (Sennrich et al., 2016b).",
"For our experiments, we use Moses (Koehn et al., 2007).",
"We train a 5-gram language model using KenLM (Heafield et al., 2013), and use target style sentences from GYAFC and the sub-sampled target style sentences from out-of-domain Yahoo Answers, as in Moore and Lewis (2010), to create a large language model.",
"While encoder-decoder based neural network models have become quite successful for MT(Sutskever et al., 2014; Bahdanau et al., 2014; Cho et al., 2014), the field of style transfer, has not yet been able to fully take advantage of these advances owing to the lack of availability of large parallel data.",
"With GYAFC we can now show how well NMT techniques fare for style transfer.",
"We experiment with three NMT models: NMT baseline: Our baseline model is a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) encoder-decoder model with attention (Bahdanau et al., 2014).",
"11 We pretrain the input word embeddings on Yahoo Answers using GloVE (Pennington et al., 2014).",
"As in our PBMT based approach, we train our NMT baseline model on the output of the rule-based approach when applied to GYAFC.",
"NMT Copy: Jhamtani et al., (2017) introduce a copy-enriched NMT model for style transfer to better handle stretches of text which should not be changed.",
"We incorporate this mechanism into our NMT Baseline.",
"NMT Combined: The size of our parallel data is smaller than the size typically used to train NMT models.",
"Motivated by this fact, we propose several variants to the baseline models that we find helps minimize this issue.",
"We augment the data used to train NMT Copy via two techniques:",
"1) we run the PBMT model on additional source data, and",
"2) we use back-translation (Sennrich et al., 2016c) of the PBMT model to translate the large number of in-domain target style sentences from GYAFC.",
"To balance the over one million artificially generated pairs from the respective techniques, we upweight the rule-based GYAFC data via duplication.",
"12 5 Evaluation As discussed earlier, there has been very little research into best practices for style transfer evaluation.",
"Only a few works have included a human evaluation (Xu et al., 2012; Jhamtani et al., 2017), and automatic evaluations have employed BLEU or PINC (Xu et al., 2012; Chen and Dolan, 2011), which have been borrowed from other fields and not vetted for this task.",
"In our work, we conduct a more thorough and detailed evaluation using both humans and automatic metrics to assess transformations.",
"Inspired by work in the paraphrase community (Callison-Burch, 2008), we solicit ratings on how formal, how fluent and how meaning-preserving a rewrite is.",
"Additionally, we look at the correlation between the human judgments and the automatic metrics.",
"We perform human-based evaluation to assess model outputs on the four criteria: formality , fluency , meaning and overall .",
"For a subset of 500 sentences from the test sets of both Entertainment & Music and Family & Relationship domains, we collect five human judgments per sentence per criteria using Amazon Mechanical Turk as follows: 12 Training data sizes for different methods are summarized in the supplementary material.",
"Formality: Following PT16, workers rate the formality of the source style sentence, the target style reference rewrite and the target style model outputs on a discrete scale of -3 to +3 described as: -3: Very Informal, -2: Informal, -1: Somewhat Informal, 0: Neutral, 1: Somewhat Formal, 2: Formal and 3: Very Formal .",
"Fluency: Following Heilman et al. (2014), workers rate the fluency of the source style sentence, the target style reference rewrite and the target style model outputs on a discrete scale of 1 to 5 described as: 5: Perfect, 4: Comprehensible, 3: Somewhat Comprehensible, 2: Incomprehensible .",
"We additionally provide an option of 1: Other for sentences that are incomplete or just a fragment.",
"Meaning Preservation: Following the annotation scheme developed for the Semantic Textual Similarity (STS) dataset (Agirre et al., 2016), given two sentences i.e. the source style sentence and the target style reference rewrite or the target style model output, workers rate the meaning similarity of the two sentences on a scale of 1 to 6 described as: 6: Completely equivalent, 5: Mostly equivalent, 4: Roughly equivalent, 3: Not equivalent but share some details, 2: Not equivalent but on same topic, 1: Completely dissimilar .",
"Overall Ranking: In addition to the fine-grained human judgments, we collect judgments to assess the overall ranking of the systems.",
"Given the original source style sentence, the target style reference rewrite and the target style model outputs, we ask workers to rank the rewrites in the order of their overall formality, taking into account both fluency and meaning preservation.",
"We then rank the model using the equation below: rank ( model ) = 1 | S | X s S 1 | J | X j J rank ( s model , j ) (1) where, model is the one of our models, S is a subset of 500 test set sentences, J is the set of five judgments, s model is the model rewrite for sentence s , and rank ( s model , j ) is the rank of s model in judgment j .",
"cases the annotations looked correct.",
"But as is common in any such crowdsourced data collection process, there were some errors, especially in the overall ranking of the systems.",
"Formality: We use the formality classifier described in PT16.",
"We find that the classifier trained on the answers genre of PT16 dataset does not perform well when tested on our datasets.",
"Hence, we collect formality judgments for an additional 5000 sentences and use the formality classifier re-trained on this in-domain data.",
"Fluency: We use the reimplementation 13 of Heilman et al. (2014) (H14 in Table 4) which is a statistical model for predicting the grammaticality of a sentence on a scale of 0 to 4 previously shown to be effective for other generation tasks like grammatical error correction (Napoles et al., 2016).",
"Meaning Preservation: Modeling semantic similarity at a sentence level is a fundamental language processing task, and one that is a wide open field of research.",
"Recently, He et al., (2015) (HE 15 in Table 4) developed a convolutional neural network based sentence similarity measure.",
"We use their off-the-shelf implementation 14 to train a model on the STS and use it to measure the meaning similarity between the original source style sentence and its target style rewrite (both reference and model outputs).",
"Overall Ranking: We experiment with BLEU (Papineni et al., 2002) and PINC (Chen and Dolan, 2011) as both were used in prior style evaluations, as well as TERp (Snover et al., 2009).",
"In this section, we discuss how well the five models perform in the informal to formal style transfer task using human judgments ( 6.1) and automatic metrics ( 6.2), the correlation of the automatic metrics and human judgments to determine the ef-13",
"ficacy of the metrics ( 6.3) and present a manual analysis ( 6.4).",
"We randomly select 500 sentences from each test set and run all five models.",
"We use the entire train and tune split for training and tuning.",
"We discuss results only on the E&M domain and list results on the F&R domain in the supplementary material.",
"Table 4 shows the results for human 6.1 and automatic 6.2 evaluation of model rewrites.",
"For all metrics except TERp , a higher score is better.",
"For each of the automatic metrics, we evaluate against four human references.",
"The row Original Informal' contains the scores when the original informal sentence is compared with the four formal reference rewrites.",
"Comparing the model scores to this score helps us understand how closer are the model outputs to the formal reference rewrites compared to initial distance between the informal and the formal reference rewrite.",
"The columns marked Human' in Table 4 show the human judgments for the models on the three separate criteria of formality , fluency and meaning collected using the process described in Section 5.1.",
"15 The NMT Baseline and Copy models beat others on the formality axis by a significant margin.",
"Only the NMT Combined model achieves a statistically higher fluency score when compared to the rule-based baseline model.",
"As expected, the rule-based model is the most meaning preserving since it is the most conservative.",
"Figure 3 shows the trend in the four leading models along formality and meaning for varying lengths of the source sentence.",
"NMT Combined beats PBMT on formality for shorter lengths whereas the trend reverses as the length increases.",
"PBMT generally preserves meaning more than the NMT Combined.",
"We find that the fluency scores for all models decreases as the sentence length increases which is similar to the trend generally observed with machine translation based approaches.",
"Since a good style transfer model is the one that attains a balanced score across all the three axes, we evaluate the models on a combination of these metrics 16 shown under the column Combined' in Table 4.",
"NMT Combined is the only model having a combined score statistically greater than the rule-based approach.",
"15 Out of the four reference rewrites, we pick one at random to show to Turkers.",
"16 We recalibrate the scores to normalize for different ranges.",
"Finally, Table 5 shows the overall rankings of the models from best to worst in both domains.",
"PBMT and NMT Combined models beat the rule-based model although not significantly in the E&M domain but significantly in the F&R domain.",
"Interestingly, the rule-based approach attains third place with a score significantly higher than NMT Copy and NMT Baseline models.",
"It is important to note here that while such a rule-based approach is relatively easy to craft for the formality style transfer task, the same may not be true for other styles like politeness or persuasiveness.",
"Under automatic metrics, the formality and meaning scores align with the human judgments with the NMT Baseline and NMT Copy winning on formality and rule-based winning on meaning.",
"The fluency score of the NMT Baseline is the highest in contrast to human judgments where the NMT Combined wins.",
"This discrepancy could be due to H14 being trained on essays which contains sentences of a more formal genre compared to Yahoo Answers.",
"In fact, the fluency classifier scores the formal reference quite low as well.",
"Under overall metrics, PBMT and NMT Combined models beat other models as per BLEU (significantly) and TERp (not significantly).",
"NMT Baseline and NMT copy win over other models as per PINC which can be explained by the fact that PINC measures lexical dissimilarity with the source and NMT models tend towards making more changes.",
"Although such an analysis is useful, for a more thorough understanding of these metrics, we next look at their correlation with human judgments.",
"We report the spearman rank correlation co-efficient between automatic metrics and human judgments in Table",
"6. For formality , fluency and meaning , the correlation is with their respective human judgments whereas for BLEU, TERp and PINC, the correlation is with the overall ranking.",
"We see that the formality and the fluency metrics correlate moderately well while the meaning metric correlates comparatively poorly.",
"To be fair, the HE 15 classifier was trained on the STS dataset which contains more formal writing than informal.",
"BLEU correlates moderately well (better than what XU 12 observed for the Shakespeare task) whereas the correlation drops for TERp.",
"PINC, on the other hand, correlates very poorly with a positive correlation with rank when it should have a negative correlation with rank, just like BLEU.",
"This sheds light on the fact that PINC, on its own, is not a good metric for style transfer since it prefers lexical edits at the cost of meaning changes.",
"In the Shakespeare task, XU 12 did observe a higher correlation with PINC (0.41) although the correlation was not with overall system ranking but rather only on the style metric.",
"Moreover, in the Shakespeare task, changing the text is more favorable than in formality.",
"The prior evaluations reveal the relative performance differences between approaches.",
"Here, we identify trends per and between approaches.",
"We sample 50 informal sentences total from both domains and then analyze the outputs from each model.",
"We present sample sentences in Table",
"7. The NMT Baseline and NMT Copy tend to have the most variance in their performance.",
"This is likely due to the fact that they are trained on only 50K sentence pairs, whereas the other models are trained on much more data.",
"For shorter sentences, these models make some nice formal transformations like from very dumb ' to very foolish '.",
"However, for longer sentences, these models make drastic meaning changes and drop some content altogether (see examples in Table 7).",
"On the 136 Entertainment & Music Original Informal Wow , I am very dumb in my observation skills",
"They make changes more conservatively but when they do, they are usually correct.",
"Thus, most of the outputs from these two models are usually meaning preserving but at the expense of a lower formality score improvement.",
"In most examples, all models are good at removing very informal words like stupid ', idiot ' and hell ', with PBMT and NMT Combined models doing slightly better.",
"All models struggle when the original sentence is very informal or disfluent.",
"They all also struggle with sentence completions that humans seem to be very good at.",
"This might be because humans assume a context when absent, whereas the models do not.",
"Unknown tokens, either real words or misspelled words, tend to wreak havoc on all approaches.",
"In most cases, the models simply did not transform that section of the sentence, or remove the unknown tokens.",
"Most models are effective at low-level changes such as writing out numbers, inserting commas, and removing common informal phrases.",
"The goal of this paper was to move the field of style transfer forward by creating a large training and evaluation corpus to be made public, showing that adapting MT techniques to this task can serve as strong baselines for future work, and analyzing the usefulness of existing metrics for overall style transfer as well as three specific criteria of automatic style transfer evaluation.",
"We view this work as rigorously expanding on the foundation set by XU 12 five years earlier.",
"It is our hope that with a common test set, the field can finally benchmark approaches which do not require parallel data.",
"We found that while the NMT systems perform well given automatic metrics, humans had a slight preference for the PBMT approach.",
"That being said, two of the neural approaches (NMT Baseline and Copy) often made successful changes and larger rewrites that the other models could not.",
"However, this often came at the expense of a meaning change.",
"We also introduced new metrics and vetted all metrics using comparison with human judgments.",
"We found that previously-used metrics did not correlate well with human judgments, and thus should be avoided in system development or final evaluation.",
"The formality and fluency metrics correlated best and we believe that some combination of these metrics with others would be the best next step in the development of style transfer metrics.",
"Such a metric could then in turn be used to optimize MT models.",
"Finally, in this work we focused on one particular style, formality.",
"The long term goal is to generalize the methods and metrics to any style.",
"The authors would like to thank Yahoo Research for making their data available.",
"The authors would also like to thank Junchao Zheng and Claudia Leacock for their help in the data creation process, Courtney Napoles for providing the fluency scores, Marcin Junczys-Dowmunt, Rico Sennrich, Ellie Pavlick, Maksym Bezva, Dimitrios Alikan-iotis and Kyunghyun Cho for helpful discussion and the three anonymous reviewers for their useful comments and suggestions."
] | [
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"method",
"abstain",
"other",
"other"
] |
[
"Existing end-to-end dialog systems perform less effectively when data is scarce.",
"To obtain an acceptable success in real-life online services with only a handful of training examples, both fast adaptability and reliable performance are highly desirable for dialog systems.",
"In this paper, we propose the Meta-Dialog System (MDS), which combines the advantages of both meta-learning approaches and human-machine collaboration.",
"We evaluate our methods on a new extended-bAbI dataset and a transformed MultiWOZ dataset for low-resource goal-oriented dialog learning.",
"Experimental results show that MDS significantly outperforms non-meta-learning baselines and can achieve more than 90% per-turn accuracies with only 10 dialogs on the extended-bAbI dataset.",
"End-to-end neural models have shown a great potential in building flexible goal-oriented dialog systems.",
"They can be directly trained on past dialogs without any domain-specific handcrafting, which makes it easy to automatically scale up to new domains (Bordes et al., 2017).",
"However, these models are normally data-hungry and have only been successfully applied to domains with rich datasets (Perez et al., 2017; Luo et al., 2019; Kim et al., 2019).",
"In real-world scenarios, common issues with end-to-end dialog models include: (1) the shortage of proper training dialogs because of the high cost of data collection and cleaning, i.e., the data scarcity problem (Zhao and Eskenazi, 2018), and (2) a large gap between limited data and unknown online test examples, i.e., the covariate shift effect (Liu et al.).",
"Such problems can lead to a significant performance degradation in dialog systems, which Corresponding author may harm the users' experience and result in loss of customers in commercial applications.",
"Therefore, both fast adaptability and reliable performance are strongly desirable for practical system deployment.",
"Fast adaptability reflects the efficiency of adapting dialog systems to domains with low-resource data.",
"Reliable performance reflects the robustness of handling unpredictable user behaviors in online services.",
"To boost the online performance of dialog systems, there have been some recent work (Rajendran et al., 2019; Wang et al., 2019; Lu et al., 2019) on designing end-to-end models in a human-machine joint-teaming manner.",
"For instance, the dialog system in (Rajendran et al., 2019) can identify an ongoing dialog during testing when the system might fail and transfer it to a human agent.",
"But all these methods are trained with sufficient data, which hinders the possibility of rapidly prototyping the models in new domains with restricted resources.",
"In this paper, we formulate the low-resource goal-oriented dialog learning as a few-shot learning problem, where a limited numbers of dialogs are used for training and the remaining for the test.",
"We propose the Meta-Dialog System (MDS), an end-to-end human-machine teaming framework optimized by the model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017).",
"In general, MDS learns to make prediction and requests human by finding good initial parameters, which can be adapted to new tasks fast and reliably by using fewer dialogs.",
"We evaluate our methods on a new multi-domain dialog dataset called extended-bAbI .",
"Results show that MDS achieves obvious performance improvement over baselines and attains more than 90% per-turn accuracy on new domains with only 10 dialogs.",
"We also perform experiments on MultiWOZ dataset (Eric et al., 2019) which has been transformed into simplified bAbI format and observe similar superior results with MDS.",
"In summary, the main contributions of this paper are three-fold: (1) To the best of our knowledge, this is the first study on applying meta-learning to retrieval-based end-to-end goal-oriented dialog systems; (2) we leverage the MAML algorithm to optimize a human-machine collaborative dialog system and show very promising results on the low-resource dialog tasks; and (3) we propose a new dataset and hope that can help bring forward the research in this area.",
"In this section, we first introduce the problem definition and our new dataset; we then elaborate the framework of MDS and meta-learning procedures.",
"Problem Definition.",
"We focus on the retrieval-based goal-oriented dialog tasks (Perez et al., 2017), where a training data d i usually contains a triple ( H i , y i , R ) .",
"H i denotes the dialog history consisting of all user utterances and system responses up to the current turn, R is a set of given candidate responses and y i is the index of the correct response in R .",
"The main task is to train an end-to-end dialog model to predict y i from R based on H i .",
"Extended-bAbI Dataset.",
"The original bAbI dataset (Bordes et al., 2017) is not suitable for low-resource settings due to the lack of domains and tasks.",
"We extend it into a multi-domain dataset through complicated simulation rules and construct templates with a more diversity to raise the diffi-culty.",
"There are 7 domains in total: restaurant, flights, hotels, movies, music, tourism and weather , each of which has its own ontology and the candidate response set.",
"Similar to (Bordes et al., 2017), a complete dialog in extended-bAbI contains four phases of interactions: (1) the system asks for required attributes to constrain the search and issues the first API call; (2) the user updates their requests for revised API calls; (3) the system con-firms for multiple times to determine the entity the user wants; (4) the user requests more attributes for extra information based on the final entity.",
"The total number of dialogs is 21,000 and the detailed examples and statistics are given in Appendix A.1.",
"In MDS, there is an encoding module to extract neural features of dialogs and a policy module to make system actions of either predicting responses or requesting human.",
"All modules are jointly optimized with the MAML algorithm.",
"The main framework of training MDS is shown in Figure 1.",
"Encoding Module.",
"It contains a history encoder to compute the dialog state vector s i for H i and a response encoder to compute the response embedding r j for the j -th response in R .",
"The dimensions of s i and r j are set as the same.",
"In this paper, we use the MemN2N (Sukhbaatar et al., 2015) as the history encoder and a simple additive model for the response encoder, but many other models optimized by gradient descent may be applied here.",
"Policy Module.",
"This module consists of a switch S that makes a binary decision whether to request human to select the response, and a response predictor P that predicts the right response itself if human is not requested.",
"We assume that the response chosen by human is always correct.",
"For the optimization of P , the widely used large-margin cosine loss (Wang et al., 2018; Lin and Xu, 2019) is employed since it maximizes the decision margin in the angular space and is able to force the model to learn more discriminative deep features.",
"Suppose a batch of training data is D = { d 1 , ...d i , ..., d |D| } , then the formulation is: LLMC = |D| (cid:88) i =1 log e a (cos( s i ,r yi ) b ) e a (cos( s i ,r yi ) b ) + (cid:80) j (cid:54) = y i e a cos( s i ,r j ) (1) where cos( , ) is a function that calculates the cosine similarity of two input vectors.",
"a is the scaling factor and b is the cosine margin ( a = 30 , m = 0 . 1 in our experiments).",
"In the test phase, the model predicts an answer according to the maximal cosine angle y i = argmax j cos( s i , r j ) .",
"The switch S is a neural binary classifier that also takes s i and each r j as input and calculate the decision probability of requesting human as follows: w ij = e s T i Wr j / (cid:88) |R| k =1 e s T i Wr k (2) c i = (cid:88) |R| j =1 w ij r j (3) f i = s i c i (4) p i = ( FC ( f i )) (5) where is the sigmoid function and the concatenation function for vectors.",
"FC( ) is a fully-connected neural network with one hidden layer that has half size of the input layer and is activated Figure 1: An overview of training the Meta-Dialog System.",
"by tanh function.",
"|R| is the size of R and W is a trainable square matrix.",
"Learning to switch.",
"Since there are no actual labels for S to indicate whether it is correct to ask human or not, some previous work (Woodward and Finn, 2016; Rajendran et al., 2019) proposes to use the REINFORCE algorithm (Williams, 1992) for weakly-supervised training, but their reward settings fail to penalize the case when the model asks human while it can give right prediction, which may lead to redundant requests.",
"To consider this effect, we propose a new reward definition here.",
"For the batch data D , we calculate the F1 scores 1 for positive data and negative data, respectively, and take the average of them to get a scalar value score ( D ) .",
"Then each data d i D is assigned with a reward by computing an incremental value as below: r t = score ( D ) score ( D d i ) (6) Through maximizing such rewards, the switch S learns to be more effective and asks human when it is necessary.",
"The reinforcement learning loss for S is LRL = (cid:80) |D| i =1 r i log p i , and the final loss of our model is L = LLMC + LRL .",
"We rewrite the final loss L as L ( M , D ) for clarity, where M denotes the dialog model with trainable parameters and D is the batch data for training.",
"During meta-learning, we first choose one domain as the target domain and the rest as source domains.",
"Then we uniformly sample K different domains T = { 1 , . . . , K } from source domains as meta-tasks.",
"For each meta-task k , we sample N data as the support set D sup k and other N data with the same answers as the query set D que k .",
"1 Detailed explanations can be found in Appendix A.2.",
"Algorithm 1 Meta-learning for MDS Input: The learning rates , Output: optimal meta-learned model 1: Initialize model parameters randomly 2: while not converged do 3: Sample T from source domains and prepare D sup k , D que k 4: for each k do 5: Evaluate L ( M , D sup k ) 6: Compute (cid:48) k = L ( M , D sup k ) 7: Evaluate L ( M (cid:48) k , D que k ) 8: end for 9: Update (cid:80) Kk =1 L ( M (cid:48) k , D que k ) 10: end while M is first updated on support sets for each k : (cid:48) k = L ( M , D sup k ) (7) Then M is evaluated on each D que k with (cid:48) k respectively and is optimized as follows: (cid:88) K k =1 L ( M (cid:48) k , D que k ) (8) where , are learning rates.",
"By training on multiple tasks via MAML, M can learn good initial parameters that is applicable on new tasks or domains (Finn et al., 2017; Mi et al., 2019).",
"The algorithm is summarised in Algorithm 1.",
"After this meta-learning as pre-training, we fine-tune M on the target domain with the first L dialogs of its training set, where L is a small number.",
"To mimic the situation of online testing, we evaluate M on the whole test sets and regard those unseen user utterances as new user behaviours.",
"In our experiments, we first verify the capability of MDS on our newly simulated dialog dataset",
"extended-bAbI , and then conduct extra evaluation on the more realistic dataset MultiWOZ 2.1 (Eric et al., 2019).",
"We select each domain as the target domain in turn and take the average of the results in all domains.",
"Metric.",
"Following (Wang et al., 2019), we report the user-perceived per-turn accuracy (per-turn accuracy' is used in the remainder of the paper), where the prediction of one turn is considered correct if the model either selects the right response by itself or asks human.",
"To be fair, we also report the human request rate.",
"The less the request rate and higher per-turn accuracy are, the more reliable the model performs online.",
"Implementation details.",
"For the meta-learning, we use SGD for the inner loop and Adam for the outer loop with learning rate = 0 .",
"01 and = 0 .",
"001 .",
"The meta-task size K is 4 and the support or query set size N is 16.",
"For the standard MLE training, we use Adam with a learning rate of 0.001 and set the batch size as 32.",
"Both schemes are trained for a maximum of 5000 iterations with early stopping on the validation set.",
"During fine-tuning on new domains, we use SGD with the learning rate 0.01 for all models and report the final results after fine-tuning 10 iterations on L training dialogs of the target domain, where L = 0 , 1 , 5 , 10 .",
"The word vector size is 25 and all MemN2Ns take 3 hops.",
"We compare MDS with the following baselines: Mem : A MemN2N (Sukhbaatar et al., 2015) model trained with standard MLE.",
"MetaMem : A MemN2N trained with MAML.",
"Both Mem and MetaMem can not request human.",
"Mem+C : A MemN2N model combined with a binary classifier in (Rajendran et al., 2019), which has different objective functions and optimization.",
"IDS : The incremental dialog system used in (Wang et al., 2019), which requests human by estimating the uncertainty through a variational autoencoder.",
"MDS -switch : A MDS without the switch S .",
"MDS rand : A MDS whose switch is replaced with a random classifier that has the same request rate.",
"MDS mle : A MDS whose meta-learning optimization is replaced with standard MLE.",
"Table 1 shows few-shot adaptation results for different methods.",
"MDS significantly outperforms other models under all adaptation sizes of new dialogs and can achieve a 91.31% per-turn accuracy on average with only 10 new dialogs.",
"There is a gap between methods without the switch (such as Mem, MetaMem and MDS -switch ) and methods with the switch in Table 1, indicating that the switch S is crucial for improving the overall per-turn accuracy because of the human agent.",
"However, without proper objective functions and meta-learning optimization, Mem+C and IDS 2 have poorer performances in both metrics than MDS even if they contain the switch module.",
"In the ablation study, we see a steady increase of about 10% per-turn accuracy from the comparison between MDS and MDS rand , suggesting that the switch does identify intractable dialogs.",
"MDS mle is the closest baseline to MDS, but we still observe an obvious improvement, which means joint optimization of S and P via meta-learning allows faster and better adaptation while maintaining similar request rates.",
"Appendix A.3 illustrates detailed case studies for different methods.",
"To further investigate the adaptation process, we present the fine-tuning curves for different methods with 1 dialog adaptation in Figure 2.",
"As it can be seen, MDS achieves the best accuracy at the beginning and converges fastest as well, showing that it can transfer on new tasks quickly by finding better parameter initialization.",
"MultiWOZ (Budzianowski et al., 2018) is a widely-used multi-domain Wizard-of-Oz dialog dataset spanning 7 distinct domains and containing 10k dialogs.",
"This realistic dataset has been a standard benchmark for various dialog tasks such as belief tracking and policy optimization.",
"In our experiment, we use the corrected version MultiWOZ 2.1 (Eric et al., 2019) for evaluation.",
"To translate the MultiWOZ dialogs into bAbI-format data, we first delexicalize the slot-values in user utterances using dialog labels, and then produce a set of canonical system acts as the candidate responses by simplifying the original dialog acts.",
"Only dialogs containing single domain are used in our experiments and a MultiWOZ dialog sample is given in Appendix A.4.",
"Table 2 shows the adaptation results for different models on MultiWOZ 2.1.",
"It can be seen that MDS still largely outperforms other models with the adaptation of 10 dialogs.",
"The degradation of per-turn accuracy from extended-bAbI to MultiWOZ is reasonable since the user utterance is more diverse and the dialog policy is more flexible.",
"End-to-end neural approaches of building dialog systems have attracted increasing research interest.",
"The work of (Bordes et al., 2017) is the first attempt to solve goal-oriented dialog tasks with end-to-end models.",
"Further improvements has been made in (Williams et al., 2017) to combine explicit domain-specific knowledge and implicit RNN features.",
"Luo et al. (2019) take user personalities into consideration for better user satisfaction.",
"Rajendran et al. (2018) learn dialogs with multiple possible answers.",
"Our work is inspired by the work of (Rajendran et al., 2019; Wang et al., 2019), which Method Adapt with 10 dialogs accuracy request Mem 56.87 1.63 n.a. MetaMem 62.78 2.05 n.a. Mem+C 80.59 3.13 38.18 5.01 MDS -switch 64.50 3.75 n.a. MDS rand 74.78 4.35 38.34 MDS mle 80.92 3.02 37.91 4.20 MDS 83.52 3.30 38.34 6.96 Table 2: Few-shot test results on MultiWOZ 2.1.",
"propose to solve unseen user behaviors through human-machine teamwork.",
"The research of (Liu et al.; Chen et al., 2017; Lu et al., 2019) also show the advantages of incorporating the role of human to teach online.",
"However, dialog learning in low-resource scenarios has not been investigated.",
"Meta-learning aims to learn new tasks rapidly with a few training examples (Sung et al., 2018; Finn et al., 2017), which fits well to our task.",
"There have been some work applying meta-learning to other tasks in dialog research, such as that in (Dou et al., 2019; Geng et al., 2019) for natural language understanding and (Qian and Yu, 2019; Mi et al., 2019) for natural language generation.",
"In this paper, we leverage the MAML algorithm to optimize a human-machine collaborative dialog system, which shows good results for both fast adaptability and reliable performance.",
"In the future, we plan to use more powerful encoders and evaluate our methods on real dialog data.",
"The research of the last author is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC)."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"result",
"method",
"other"
] |
[
"In order to alleviate data sparsity and overfitting problems in maximum likelihood estimation (MLE) for sequence prediction tasks, we propose the Generative Bridging Network (GBN), in which a novel bridge module is introduced to assist the training of the sequence prediction model (the generator network).",
"Unlike MLE directly maximizing the conditional likelihood, the bridge extends the point-wise ground truth to a bridge distribution conditioned on it, and the generator is optimized to minimize their KL-divergence.",
"Three different GBNs, namely uniform GBN, language-model GBN and coaching GBN, are proposed to penalize confidence, enhance language smoothness and relieve learning burden.",
"Experiments conducted on two recognized sequence prediction tasks (machine translation and abstractive text summarization) show that our proposed GBNs can yield significant improvements over strong baselines.",
"Furthermore, by analyzing samples drawn from different bridges, expected influences on the generator are verified.",
"Sequence prediction has been widely used in tasks where the outputs are sequentially structured and mutually dependent.",
"Recently, massive explorations in this area have been made to solve practical problems, such as machine translation (Bah-danau et al., 2014; Ma et al., 2017; Norouzi et al., 2016), syntactic parsing (Vinyals et al., 2015), spelling correction (Bahdanau et al., 2014), image captioning (Xu et al., 2015) and speech recognition (Chorowski et al., 2015).",
"Armed with mod-ern computation power, deep LSTM (Hochreiter and Schmidhuber, 1997) or GRU (Chung et al., 2014) based neural sequence prediction models have achieved the state-of-the-art performance.",
"The typical training algorithm for sequence prediction is Maximum Likelihood Estimation 1 2 1 2 1 1 1 2 (| ) 1 2 (|) Figure 1: The overall architecture of our novel Generative Bridging Network (GBN).",
"(MLE), which maximizes the likelihood of the target sequences conditioned on the source ones: = argmax E ( X,Y ) D log p ( Y | X ) (1) Despite the popularity of MLE or teacher forcing (Doya, 1992) in neural sequence prediction tasks, two general issues are always haunting: 1).",
"data sparsity and 2).",
"tendency for overfitting, with which can both harm model generalization.",
"To combat data sparsity, different strategies have been proposed.",
"Most of them try to take advantage of monolingual data (Sennrich et al., 2015; Zhang and Zong, 2016; Cheng et al., 2016).",
"Others try to modify the ground truth target based on derived rules to get more similar examples for training (Norouzi et al., 2016; Ma et al., 2017).",
"To alleviate overfitting, regularization techniques, 1706 such as confidence penalization (Pereyra et al., 2017) and posterior regularization (Zhang et al., 2017), are proposed recently.",
"As shown in Figure 1, we propose a novel learning architecture, titled Generative Bridging Network (GBN), to combine both of the benefits from synthetic data and regularization.",
"Within the architecture, the bridge module (bridge) first transforms the point-wise ground truth into a bridge distribution, which can be viewed as a target proposer from whom more target examples are drawn to train the generator.",
"By introducing different constraints, the bridge can be set or trained to possess specific property, with which the drawn samples can augment target-side data (alleviate data sparsity) while regularizing the training (avoid overfitting) of the generator network (generator).",
"In this paper, we introduce three different constraints to build three bridge modules.",
"Together with the generator network, three GBN systems are constructed: 1).",
"a uniform GBN, instantiating the constraint as a uniform distribution to penalize confidence; 2).",
"a language-model GBN, instantiating the constraint as a pre-trained neural language model to increase language smoothness; 3).",
"a coaching GBN, instantiating the constraint as the generator's output distribution to seek a close-to-generator distribution, which enables the bridge to draw easy-to-learn samples for the generator to learn.",
"Without any constraint, our GBN degrades to MLE.",
"The uniform GBN is proved to minimize KL-divergence with a so-called payoff distribution as in reward augmented maximum likelihood or RAML (Norouzi et al., 2016).",
"Experiments are conducted on two sequence prediction tasks, namely machine translation and abstractive text summarization.",
"On both of them, our proposed GBNs can significantly improve task performance, compared with strong baselines.",
"Among them, the coaching GBN achieves the best.",
"Samples from these three different bridges are demonstrated to confirm the expected impacts they have on the training of the generator.",
"In summary, our contributions are: A novel GBN architecture is proposed for sequence prediction to alleviate the data sparsity and overfitting problems, where the bridge module and the generator network are integrated and jointly trained.",
"Different constraints are introduced to build GBN variants: uniform GBN, language-model GBN and coaching GBN.",
"Our GBN architecture is proved to be a generalized form of both MLE and RAML.",
"All proposed GBN variants outperform the MLE baselines on machine translation and abstractive text summarization.",
"Similar relative improvements are achieved compared to recent state-of-the-art methods in the translation task.",
"We also demonstrate the advantage of our GBNs qualitatively by comparing ground truth and samples from bridges.",
"We are willing to design an architecture which can integrate both of their benefits.",
"The basic idea is to use a so-called bridge which transforms Y to an easy-to-sample distribution, and then use this distribution (samples) to train and meanwhile regularize the sequence prediction model (the generator).",
"The bridge is viewed as a conditional distribution 1 p ( Y | Y ) to get more target Y s given Y so as to construct more training pairs ( X, Y ) .",
"In the meantime, we could inject (empirical) prior knowledge into the bridge through its optimization objective which is inspired by the design of the payoff distribution in RAML.",
"We formulate the optimization objective with two parts in Equation (2):",
"a) an expected similarity score computed through a similarity score function S ( , Y ) interpolated with",
"b) a knowledge injection constraint 2 C ( p ( Y | Y ) , p c ( Y )) where controls the 1 should be treated as an index of the bridge distribution, so it is not necessarily the parameters to be learned.",
"strength of the regularization, formally, we write the objective function LB ( ) as follows:",
"LB ( ) = EY p ( Y | Y ) [ \u0000 S ( Y, Y )] + C ( p ( Y | Y ) , p c ( Y )) (2)",
"Minimizing it empowers the bridge distribution not only to concentrate its mass around the ground truth Y but also to adopt certain hope property from p c ( Y ) .",
"With the constructed bridge distribution, we optimize the generator network P ( Y | X ) to match its output distribution towards the bridge distribution by minimizing their KL-divergence: LG ( ) = KL ( p ( Y | Y ) || p ( Y | X )) (3) In practice, the KL-divergence is approximated through sampling process detailed in Sec. 2.3.",
"As a matter of fact, the bridge is the crux of the integration: it synthesizes new targets to alleviate data sparsity and then uses the synthetic data as regularization to overcome overfitting.",
"Thus a regularization-by-synthetic-example approach, which is very similar to the prior-incorporation-by-virtual-example method (Niyogi et al., 1998).",
"Our generator network is parameterized with the commonly used encoder-decoder architecture (Bahdanau et al., 2014; Cho et al., 2014).",
"The encoder is used to encode the input sequence X to a sequence of hidden states, based on which an attention mechanism is leveraged to compute context vectors at the decoding stage.",
"The context vector together with previous decoder's hidden state and previously predicted label are used, at each time step, to compute the next hidden state and predict an output label.",
"As claimed in Equation (3), the generator network is not trained to maximize the likelihood of the ground truth but tries best to match the bridge distribution, which is a delegate of the ground truth.",
"We use gradient descent to optimize the KL-divergence with respect to the generator: r LG ( ) = EY p ( Y | Y ) log r p ( Y | X ) (4) The optimization process can be viewed as the generator maximizing the likelihood of samples tribution p c , however, we believe mathematical form of C is not restricted, which could motivate further development.",
"drawn from the bridge.",
"This may alleviate data sparsity and overfitting by posing more unseen scenarios to the generator and may help the generator generalize better in test time.",
"Our bridge module is designed to transform a single target example Y to a bridge distribution p ( Y | Y ) .",
"We design its optimization target in Equation (2) to consist of two terms, namely, a concentration requirement and a constraint.",
"The constraint is instantiated as KL-divergence between the bridge and a contraint distribution p c ( Y ) .",
"We transform Equation (2) as follows, which is convenient for mathematical manipulation later: LB ( ) = EY p [ \u0000 S ( Y, Y ) ] + KL ( p ( Y | Y ) || p c ( Y )) (5) S ( Y, Y ) is a predefined score function which measures similarity between Y and Y and peaks when Y = Y , while p c ( Y ) reshapes the bridge distribution.",
"More specifically, the first term ensures that the bridge should concentrate around the ground truth Y , and the second introduces willing property which can help regularize the generator.",
"The hyperparameter can be interpreted as a temperature which scales the score function.",
"In the following bridge specifications, the score function S ( Y, Y ) is instantiated according to Sec. 3.1.",
"1. Delta Bridge The delta bridge can be seen as the simplest case where = 0 or no constraint is imposed.",
"The bridge seeks to minimize EY p ( Y | Y ) [ \u0000 S ( Y,Y ) ] .",
"The optimal solution is when the bridge only samples Y , thus the Dirac delta distribution is described as follows: p ( Y | Y ) = \u0000 Y ( Y ) (6) This exactly corresponds to MLE, where only examples in the dataset are used to train the generator.",
"bridge motivates to include noise into target example, which is similar to label smoothing (Szegedy et al., 2016).",
"The loss function can be written as: LB ( ) = EY p [ \u0000 S ( Y, Y ) ] + KL ( p ( Y | Y ) || U ( Y )) (7) We can re-write it as follows by adding a constant to not change the optimization result: LB ( ) + C = KL ( p ( Y | Y ) || exp S ( Y,Y ) Z ) (8) This bridge is static for having a closed-form solution: p ( Y | Y ) = exp S ( Y,Y ) Z (9) where Z is the partition function.",
"Note that our uniform bridge corresponds to the payoff distribution described in RAML (Norouzi et al., 2016).",
"LB ( ) = EY p ( Y | Y ) [ \u0000 S ( Y, Y ) ] + KL ( p ( Y | Y ) || p LM ) (10) Similar to the uniform bridge case, we can re-write the loss function to a KL-divergence: LB ( ) + C = KL ( p ( Y | Y ) || p LM ( Y ) exp S ( Y,Y ) Z ) (11) Thus, the LM bridge is also static and can be seen as an extension of the uniform bridge, where the exponentiated similarity score is re-weighted by a pretrained LM score, and renormalized: p ( Y | Y ) = p LM ( Y ) exp S ( Y,Y ) Z (12) where Z is the partition function.",
"3. Language-model (LM) Bridge The LM bridge utilizes a pretrained neural language model p LM ( Y ) as constraint, which motivates to propose target examples conforming to language fluency.",
"The above equation looks just like the payoff distribution, whereas an additional factor is considered.",
"4. Coaching Bridge The coaching bridge utilizes the generator's output distribution as constraint, which motivates to generate training samples which are easy to be understood by the generator, so as to relieve its learning burden.",
"The coaching bridge follows the same spirit as the coach proposed in Imitation-via-Coaching (He et al., 2012), which, in reinforcement learning vocabulary, advocates to guide the policy (genera-tor) with easy-to-learn action trajectories and let it gradually approach the oracle when the optimal action is hard to achieve.",
"Since the KL constraint is a moving target when the generator is updated, the coaching bridge should not remain static.",
"Therefore, we perform iterative optimization to train the bridge and the generator jointly.",
"Formally, the derivatives for the coaching bridge are written as follows: r LB ( ) = EY p [ \u0000 S ( Y, Y ) r log p ( Y | Y )] + EY p r log p ( Y | Y ) (14) The first term corresponds to the policy gradient algorithm described in REINFORCE (Williams, 1992), where the coefficient \u0000 S ( Y, Y ) / corresponds to reward function.",
"Due to the mutual dependence between bridge module and generator network, we design an iterative training strategy, i.e. the two networks take turns to update their own parameters treating the other as fixed.",
"The training of the above three variants is illustrated in Figure 3. Since the proposed bridges can be divided into static ones, which only require pretraining, and dynamic ones, which require continual training with the generator, we describe their training process in details respectively.",
"Since closed-formed optimal distributions can be found for uniform/LM GBNs, we only need to draw samples from the static bridge distributions to train our sequence generator.",
"Unfortunately, 1709 Generator (|) ( ) LM Bridge (| ) Coach Bridge (| ) () Iterative Training Stratified-sampled Training Uniform Bridge (| ) () Pre-trained Figure 3: The training processes of the three different variants of our GBN architecture (Sec. 2.3).",
"due to the intractability of these bridge distributions, direct sampling is infeasible.",
"Therefore, we follow Norouzi et al. (2016); Ma et al. (2017) and adopt stratified sampling to approximate the direct sampling process.",
"Given a sentence Y , we first sample an edit distance m , and then randomly select m positions to replace the original tokens.",
"The difference between the uniform and the LM bridge lies in that the uniform bridge replaces labels by drawing substitutions from a uniform distribution, while LM bridge takes the history as condition and draws substitutions from its step-wise distribution.",
"Since the KL-constraint is a moving target for the coaching bridge, an iterative training strategy is designed to alternately update both the generator and the bridge (Algorithm 1).",
"We first pre-train both the generator and the bridge and then start to alternately update their parameters.",
"Figure 4 intuitively demonstrates the intertwined optimization effects over the coaching bridge and the generator.",
"We hypothesize that iterative training with easy-to-learn guidance could benefit gradient update, thus result in better local minimum.",
"We select machine translation and abstractive text summarization as benchmarks to verify our GBN framework.",
"In our experiments, instead of directly using BLEU or ROUGE as reward to guide the bridge network's policy search, we design a simple sur-",
"(15) N n represents the n-gram matching score between Y and Y .",
"In order to alleviate reward sparsity at sequence level, we further decompose the global reward S ( Y, Y ) as a series of local rewards at every time step.",
"Formally, we write the step-wise reward s ( y t | y 1: t \u0000 1 , Y ) as follows: s ( y t | y 1: t \u0000 1 , Y ) = 8>>>>>< >>>>>: 1 .",
"0; N ( y 1: t , y t \u0000 3: t ) N ( Y , y t \u0000 3: t ) 0 .",
"6; N ( y 1: t , y t \u0000 2: t ) N ( Y , y t \u0000 2: t ) 0 .",
"3; N ( y 1: t , y t \u0000 1: t ) N ( Y , y t \u0000 1: t ) 0 .",
"1; N ( y 1: t , y t ) N ( Y , y t ) 0 .",
"0; otherwise (16) where N ( Y, Y ) represents the occurrence of subsequence Y in whole sequence Y .",
"Specifically, if 1710 Algorithm 1 Training Coaching GBN procedure PRE-TRAINING Initialize p ( Y | X ) and p ( Y | Y ) with random weights and Pre-train p ( Y | X ) to predict Y given X Use pre-trained p ( Y | X ) to generate Y given X Pre-train p ( Y | Y ) to predict Y given Y end procedure procedure ITERATIVE-TRAINING while Not Converged do Receive a random example ( X, Y ) if Bridge-step then Draw samples Y from p ( Y | X ) Update bridge via Equation (14) else if Generator-step then Draw samples Y from p ( Y | Y ) Update generator via Equation (4) end if end while end procedure a certain sub-sequence y t \u0000 n +1: t from Y appears less times than in the reference Y , y t receives reward.",
"Formally, we rewrite the step-level gradient for each sampled Y as follows: \u0000 S ( Y, Y ) r log p ( Y | Y ) = X t \u0000 s ( y t | y 1: t \u0000 1 , Y ) r log p ( y t | y 1: t \u0000 1 , Y ) (17) 3.2 Machine Translation Dataset We follow Ranzato et al. (2015); Bah-danau et al. (2016) and select German-English machine translation track of the IWSLT 2014 evaluation campaign.",
"The corpus contains sentence-wise aligned subtitles of TED and TEDx talks.",
"We use Moses toolkit (Koehn et al., 2007) and remove sentences longer than 50 words as well as lower-casing.",
"The evaluation metric is BLEU (Papineni et al., 2002) computed via the multi-bleu.perl.",
"System Setting We use a unified GRU-based RNN (Chung et al., 2014) for both the generator and the coaching bridge.",
"In order to compare with existing papers, we use a similar system setting with 512 RNN hidden units and 256 as embedding size.",
"We use attentive encoder-decoder to build our system (Bahdanau et al., 2014).",
"During training, we apply ADADELTA (Zeiler, 2012) Methods Baseline Model MIXER 20.10 21.81 +1.71 BSO 24.03 26.36 +2.33 AC 27.56 28.53 +0.97 Softmax-Q 27.66 28.77 +1.11 Uniform GBN ( = 0 . 8 ) 29.10 29.80 +0.70 LM GBN ( = 0 . 8 ) 29.90 +0.80 Coaching GBN ( = 0 . 8 ) 29.98 +0.88 Coaching GBN ( = 1 . 2 ) 30.15 +1.05 Coaching GBN ( = 1 . 0 ) 30.18 +1.08 Table 1: Comparison with existing works on IWSLT-2014 German-English Machine Translation Task.",
"70 75 80 85 90 95 100 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 BLEU Epoch (Bridge) Coaching GBN Learning Curve 31.5 31.6 31.7 31.8 31.9 32 32.1 32.2 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 BLEU Epoch (Generator) Figure 5: Coaching GBN's learning curve on IWSLT German-English Dev set.",
"with = 10 \u0000 6 and = 0 .",
"95 to optimize parameters of the generator and the coaching bridge.",
"During decoding, a beam size of 8 is used to approximate the full search space.",
"An important hyper-parameter for our experiments is the temperature .",
"For the uniform/LM bridge, we follow Norouzi et al. (2016) to adopt an optimal temperature = 0 .",
"8 .",
"And for the coaching bridge, we test hyper-parameters from 2{ 0 .",
"8 , 1 .",
"0 , 1 .",
"2 } .",
"Besides comparing with our fine-tuned baseline, other systems for comparison of relative BLEU improvement are: MIXER (Ranzato et al., 2015), BSO (Wiseman and Rush, 2016), AC (Bahdanau et al., 2016), Softmax-Q (Ma et al., 2017).",
"Results The experimental results are summarized in Table 1. We can observe that our fine-tuned MLE baseline (29.10) is already over-1711 Methods RG-1 RG-2 RG-L ABS 29.55 11.32 26.42 ABS+ 29.76 11.88 26.96 Luong-NMT 33.10 14.45 30.71 SAEASS 36.15 17.54 33.63 seq2seq+att 34.04 15.95 31.68 Uniform GBN ( = 0 . 8 ) 34.10 16.70 31.75 LM GBN ( = 0 . 8 ) 34.32 16.88 31.89 Coaching GBN ( = 0 . 8 ) 34.49 16.70 31.95 Coaching GBN ( = 1 . 2 ) 34.83 16.83 32.25 Coaching GBN ( = 1 . 0 ) 35.26 17.22 32.67 Table 2: Full length ROUGE F1 evaluation results on the English Gigaword test set used by (Rush et al., 2015).",
"competing other systems and our proposed GBN can yield a further improvement.",
"We also observe that LM GBN and coaching GBN have both achieved better performance than Uniform GBN, which confirms that better regularization effects are achieved, and the generators become more robust and generalize better.",
"We draw the learning curve of both the bridge and the generator in Figure 5 to demonstrate how they cooperate during training.",
"We can easily observe the interaction between them: as the generator makes progress, the coaching bridge also improves itself to propose harsher targets for the generator to learn.",
"Dataset We follow the previous works by Rush et al. (2015); Zhou et al. (2017) and use the same corpus from Annotated English Gigaword dataset (Napoles et al., 2012).",
"In order to be comparable, we use the same script 4 released by Rush et al. (2015) to pre-process and extract the training and validation sets.",
"For the test set, we use the English Gigaword, released by Rush et al. (2015), and evaluate our system through ROUGE (Lin, 2004).",
"Following previous works, we employ ROUGE-1, ROUGE-2, and ROUGE-L as the evaluation metrics in the reported experimental results.",
"System Setting We follow Zhou et al. (2017); Rush et al. (2015) to set input and output vocabularies to 119,504 and 68,883 respectively, and we also set the word embedding size to 300 and all GRU hidden state size to 512.",
"Then we adopt dropout (Srivastava et al., 2014) with probability p = 0 .",
"5 strategy in our output layer.",
"We use attention-based sequence-to-sequence model (Bahdanau et al., 2014; Cho et al., 2014) as our baseline and reproduce the results of the baseline reported in Zhou et al. (2017).",
"As stated, the attentive encoder-decode architecture can already outperform existing ABS/ABS+ systems (Rush et al., 2015).",
"In coaching GBN, due to the fact that the input of abstractive summarization X contains more information than the summary target Y , directly training the bridge p ( Y | Y ) to understand the generator p ( Y | X ) is infeasible.",
"Therefore, we re-design the coaching bridge to receive both source and target input X, Y and we enlarge its vocabulary size to 88,883 to encompass more information about the source side.",
"In Uniform/LM GBN experiments, we also fix the hyper-parameter = 0 .",
"8 as the optimal setting.",
"Results The experimental results are summarized in Table 2. We can observe a significant improvement via our GBN systems.",
"Similarly, the coaching GBN system achieves the strongest performance among all, which again reflects our assumption that more sophisticated regularization can benefit generator's training.",
"We draw the learning curve of the coaching GBN in Figure 6 to demonstrate how the bridge and the generator promote each other.",
"By introducing different constraints into the bridge module, the bridge distribution will propose different training samples for the generator to learn.",
"From Table 3, we can observe that most samples still reserve their original meaning.",
"The uniform bridge simply performs random replacement without considering any linguistic constraint.",
"The LM bridge strives to smooth reference sentence with high-frequent words.",
"And the coaching bridge simplifies difficult expressions to relieve generator's learning burden.",
"From our experimental results, the more rational and aggressive diversifica-tion from the coaching GBN clearly benefits generator the most and helps the generator generalize to more unseen scenarios.",
"In order to resolve the data sparsity problem in Neural Machine Translation (NMT), many works have been conducted to augment the dataset.",
"The most popular strategy is via self-learning, which incorporates the self-generated data directly into training.",
"Zhang and Zong (2016) and Sennrich et al. (2015) both use self-learning to leverage massive monolingual data for NMT training.",
"Our bridge can take advantage of the parallel training data only, instead of external monolingual ones to synthesize new training data.",
"Reward augmented maximum likelihood or RAML (Norouzi et al., 2016) proposes to integrate task-level reward into MLE training by using an exponentiated payoff distribution.",
"KL divergence between the payoff distribution and the generator's output distribution are minimized to achieve an optimal task-level reward.",
"Following this work, Ma et al. (2017) introduces softmax Q-Distribution to interpret RAML and reveals its relation with Bayesian decision theory.",
"These two works both alleviate data sparsity problem by augmenting target examples based on the ground truth.",
"Our method draws inspiration from them but seeks to propose the more general Generative Bridging Network, which can transform the ground truth into different bridge distributions, from where samples are drawn will account for different interpretable factors.",
"Our coaching GBN system is inspired by imitation learning by coaching (He et al., 2012).",
"Instead of directly behavior cloning the oracle, they advocate learning hope actions as targets from a coach which is interpolated between learner's policy and the environment loss.",
"As the learner makes progress, the targets provided by the coach will become harsher to gradually improve the learner.",
"Similarly, our proposed coaching GBN is motivated to construct an easy-to-learn bridge distribution which lies in between the ground truth and the generator.",
"Our experimental results confirm its effectiveness to relieve the learning burden.",
"In this paper, we present the Generative Bridging Network (GBN) to overcome data sparsity and overfitting issues with Maximum Likelihood Estimation in neural sequence prediction.",
"Our implemented systems prove to significantly improve the performance, compared with strong baselines.",
"We believe the concept of bridge distribution can be applicable to a wide range of distribution matching tasks in probabilistic learning.",
"In the future, we intend to explore more about GBN's applications as well as its provable computational and statistical guarantees."
] | [
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"objective"
] |
[
"Abstract Transformer based architectures are recently used for the task of answering questions over tables.",
"In order to improve the accuracy on this task, specialized pre-training techniques have been developed and applied on millions of open-domain web tables.",
"In this paper, we propose two novel approaches demonstrating that one can achieve superior performance on table QA task without even using any of these specialized pre-training techniques.",
"The first model, called RCI interaction , leverages a transformer based architecture that independently classifies rows and columns to identify relevant cells.",
"While this model yields extremely high accuracy at finding cell values on recent benchmarks, a second model we propose, called RCI representation , provides a significant efficiency advantage for online QA systems over tables by materializing embeddings for existing tables.",
"Experiments on recent benchmarks prove that the proposed methods can effectively locate cell values on tables (up to 98% Hit@1 accuracy on WikiSQL lookup questions).",
"Also, the interaction model outperforms the state-of-the-art transformer based approaches, pre-trained on very large table corpora (TAPAS and TABERT ), achieving 3.4% and 18.86% additional precision improvement on the standard WikiSQL benchmark 1 .",
"Tabular data format is a commonly used layout in domain specific enterprise documents as well as open domain webpages to store structured information in a compact form (Pasupat and Liang, 2015; Canim et al., 2019).",
"In order to make use of these resources, many techniques have been proposed for the retrieval of tables (Cafarella et al., 2008; Zhang and Balog, 2018; Venetis et al., 2011; Shraga et al., 2020; Sun et al., 2016).",
"Given a large corpus of 1 The source code and the models we built are available at https://github.com/IBM/row-column-intersection.",
"documents, the goal in these studies is to retrieve top-k relevant tables based on given keyword(s).",
"The user is then expected to skim through these tables and locate the relevant cell values which is a tedious and time consuming task.",
"More recently, popular search engines made significant improvement in understanding natural language questions and finding the answers within passages, owing to the developments in transformer based machine reading comprehension (MRC) systems (Rajpurkar et al., 2016, 2018; Kwiatkowski et al., 2019; Pan et al., 2019; Alberti et al., 2019a).",
"One natural extension of these systems is to answer questions over tables.",
"These questions are broadly classified into two types: Lookup and Aggregation .",
"Lookup questions require returning exact strings from tables such as cell values whereas Aggregation questions are executed by performing an arithmetic operation on a subset of the column cells, such as Min(), Max(), Average() and Count() .",
"For look-up questions, the users can verify if the returned cell values from the table(s) are correct, while this is not applicable for Aggregation questions because a scalar value is returned as an answer.",
"Our primary focus in this paper is on Lookup questions since the answers are verifiable by users although our proposed techniques outperform the state-of-the-art (SOTA) approaches on both question types.",
"In this paper, we propose a new approach to table QA that independently predicts the probability of containing the answer to a question in each row and column of a table.",
"By taking the R ow and C olumn I ntersection (RCI) of these probabilistic predictions, RCI gives a probability for each cell of the table.",
"These probabilities are either used to answer questions directly or highlight the relevant regions of tables as a heatmap, helping users to easily locate the answers over tables (See Figure 1 for a question answered with the help of a heatmap).",
"We developed two models for RCI, called RCI interaction and RCI representation .",
"In order to evaluate these approaches, we also propose a weakly supervised MRC system as a strong baseline to identify / \"read\" relevant cells of a table.",
"In this baseline approach, we convert tables into passages and extract a relevant span of text within these passages.",
"The interaction model is designed to provide very high accuracy on finding cell values over tables for a given natural language question.",
"We demonstrate that without even using any specialized pre-trained models, we can achieve up-to 98% Hit@1 accuracy on finding cell values of tables for lookup questions from the WikiSQL benchmark.",
"Also, the interaction model outperforms the state-of-the-art transformer based approaches, TAPAS (Herzig et al., 2020) and TABERT (Yin et al., 2020), achieving 3.4% and 18.86% additional precision improvement on the standard WikiSQL benchmark, containing both lookup and aggregation questions.",
"While the interaction model yields very high accuracy on the benchmarks, the representation model has the advantage of pre-computing the embeddings for all tables in a corpus and storing them for online query processing.",
"Once a user query is received, the most relevant tables can be retrieved from a table retrieval system and relevant cell values can be highlighted using the existing embeddings of the tables, resulting in less computation per received user query, as opposed to running tables over expensive transformer architecture for every received query.",
"The specific contributions of this paper are as follows: An MRC based strong baseline for table QA task: We investigate a transfer learning approach by utilizing a fully supervised reading comprehension system built on top of a large pre-trained language model.",
"Specifi-cally, it is first fine-tuned on SQuAD then on Natural Questions and lastly trained on the table datasets.",
"The final model is used to identify relevant cells of a table for a given natural language question.",
"A transformer based interaction model for the table QA task: We propose a model for table QA task that concatenates a textual representation of each row (or column) to the text of the question and classifies the sequence pair as positive (the row/column contains the answer) or negative (the row/column does not contain the answer).",
"The proposed approach yields very high accuracy on our benchmarks, outperforming the SOTA models.",
"A transformer based representation model for the table QA task: We propose a representation model that builds vector representations of the question and each row (or column) to compare the resulting vectors to determine if the row (or column) contains the answer.",
"The proposed approach is preferred for efficiency purposes on online table retrieval systems since it enables materializing embeddings for existing tables and re-using them during online question answering over multiple tables.",
"In the following sections, we first review the prior work on QA systems over tables as well as table search from large corpora in Section",
"2. We then describe a weakly supervised machine reading comprehension (MRC) system as a baseline that is capable of answering questions over tables in Section",
"3. In Section 4, we introduce two models that decompose TableQA as the intersection between rows and columns of a table using a transformer architecture.",
"Experimental results are reported and discussed in Section 5 and finally Section 6 concludes the paper and discusses the future work.",
"QA from text: There is plenty of work on QA from plain text (Brill et al., 2002; Lin, 2007; Pasca, 2003; Kwiatkowski et al., 2019; Pan et al., 2019).",
"Typical strategies rely on token overlap between the question and passage text either based on a bag of word statistics or contextualized language model representations.",
"In either case, tabular structure is not leveraged to capture semantic relationships between rows and columns.",
"As we show in Section 5, these strategies are insufficient for answering questions over tables with high precision.",
"QA over tables: Our work mostly relates to the previous research on QA over tables (Pasupat and Liang, 2015; Sun et al., 2016; Dasigi et al., 2019).",
"They center around answering factoid questions and return the exact cell of a table that answers the query.",
"We briefly describe here how these works are different.",
"Pasupat and Liang (2015) assume access to the gold' table that contains the answer to the input question.",
"They build a semantic parser that parses the query to a logical form.",
"They likewise convert the table into a knowledge-graph and execute the logical form on it to get the answer.",
"A more advanced semantic parsing based methodology has been recently proposed by Dasigi et al. (2019).",
"This system is pre-trained on Wik-iTablesQuestions (Pasupat and Liang, 2015).",
"The proposed approach leverages an LSTM encoder-decoder model where tables are first converted to a knowledge-graph and word tokens in the questions are linked to table entities (columns and cells).",
"The questions and linked table entities are then encoded into representation vectors which are decoded to executable -DCS logical forms.",
"This logical forms are executed over a knowledge graph to get answer predictions.",
"Our approach is different, since we do not convert natural language questions into logical forms and execute them on tables.",
"Instead, we leverage transformer architectures pre-trained on large corpora and further trained on finding cell values on tables.",
"In Section 5, we show that we achieve significant improvement over this approach without using any semantic parser technique.",
"Sun et al. (2016) focus on the table retrieval problem over table corpora by leveraging the content of cell values and headers.",
"For a given query, they extract answers from millions of tables in the provided corpus.",
"They construct a unified chain representation of both the input question and the table cells and then find the table cell chain that best matches the question chain.",
"As opposed to this work, we primarily focus on answering questions over a single table rather than the retrieval of top-k tables from a corpus.",
"More recently, transformer based pre-training approaches have been introduced in TABERT (Yin et al., 2020) and TAPAS (Herzig et al., 2020) to improve accuracy for table QA.",
"TABERT has been pre-trained on 26 million tables and NL sentences extracted from Wikipedia and WDC WebTable Corpus (Yin et al., 2020).",
"The model can be plugged into a neural semantic parser as an encoder to provide contextual embeddings for tables.",
"Herzig et al. on the other hand, claim that semantic parsers incur an extra overhead of computing intermediate logical representations which can be avoided by leveraging fine-tuned models to answer questions over tables.",
"The model in TAPAS has been pre-trained on about 6 million tables extracted from Wikipedia content.",
"Our work is different from both TAPAS and TABERT .",
"First and foremost, our focus in this paper is not on pre-training a new model for table QA, but rather on leveraging the existing language models to find the connection between a question and table columns/rows with very high accuracy.",
"Second, our goal is to provide a heatmap over tables on an end-to-end table retrieval system to help users to quickly identify the regions of tables where the answers would most likely appear.",
"Because the transformer architectures are quite expensive to query, the representation model we propose radically reduces the computational overhead during online query processing.",
"Table search over the web: Another active research area in NLP is searching over web tables.",
"There are numerous search algorithms that have been explored such as keyword search (Ca-farella et al., 2008; Zhang and Balog, 2018; Venetis et al., 2011; Shraga et al., 2020), retrieve similar tables (Das Sarma et al., 2012), retrieve tables based on column names (Pimplikar and Sarawagi, 2012) and adding new columns to existing entity lists (Yakout et al., 2012; Zhang and Chakrabarti, 2013).",
"This thread of work focuses on retrieval of top-k tables with high precision from large corpora, rather than finding relevant rows and columns within tables.",
"We provide a brief description of our underlying Machine Reading Comprehension (MRC) model architecture, which we use as a strong baseline.",
"The architecture is inspired by (Alberti et al., 2019b; Pan et al., 2019; Glass et al., 2020) and direct interested readers to their papers for more details.",
"Our MRC model follows the approach introduced by (Devlin et al., 2019) of starting with a pre-trained transformer based language model (LM) and then fine-tuning MRC specific feed-forward layers on both general question answering datasets (SQuAD 2.0 and NQ) as well as the table specific question answers associated with the datasets in Section 5.",
"We use ALBERT (Lan et al., 2020) as the underlying LM similar to models which achieve SOTA on the SQuAD 2.0 leaderboard (Zhang et al., 2020b,a) at the time of writing.",
"More specifically, we show results starting from the weights and dimensions of the base v2 version (25M parameters) of the LM shared by (Lan et al., 2020).",
"We also experiment with the xxlarge v2 version (235M parameters) as well.",
"The input to the model is a token sequence ( X ) consisting of a question, passage, and special markers (a [ CLS ] token for answerability classification and [ SEP ] tokens to dileneate between the query and passage).",
"The input token sequence is passed through a deep Transformer (Vaswani et al., 2017) network to output a sequence of contextualized token representations H .",
"where W 1 , W 2 R 1 D e .",
"D e denotes the dimensionality of the embeddings ( 768 for base v2 ).",
"tb and te denote the probability of the t th token in the sequence being the answer beginning and end, respectively.",
"The model is trained using binary cross-entropy loss at each token position based on whether or not the annotated correct answer begins or ends at the t th token.",
"Unanswerable questions have their begin and end offsets set to the [ CLS ] token position.",
"At prediction time, a score is calculated for each possible span by summing the t j b and t i e at each possible i and j combination to identify the max scoring answer span.",
"The sum of the [ CLS ] b and [ CLS ] e is then subtracted from this max scoring answer span to produce a final score that can be used for thresholding (i.e., deciding whether to predict an answer or refrain from answering a ques-tion).",
"A few modifications are made in line with (Alberti et al., 2019b) to use MRC for the NQ dataset which introduces additional answer types [ short, long, yes, no, null ] .",
"Refer to the appendix for these details.",
"We fine-tune the model with the SQuAD 2.0 dataset and then the NQ dataset in line with (Pan et al., 2019; Glass et al., 2020), to produce a generic RC model comparable to the current SOTA.",
"We then train for an additional epoch on the subset of NQ which consists of short answer questions that need to be answered by lookup inside an HTML table.",
"This is about 5% of the total NQ data ( 15 , 500 question-answer pairs).",
"Note that in these cases, the input passage text consists of textual representation of tables (i.e., we introduce tabs between columns and new line characters between rows); so it is devoid of true row and column structure.",
"This pre-training and task adaptation strategy is inline with prior art (Gururangan et al., 2020) in adapting transformers.",
"Simpler pre-training strategies (e.g. relying only on SQuAD 2.0 or skipping the table specific epoch of training) were tried and found to provide similar, but generally worse, performance.",
"So those are excluded from Section 5 for brevity.",
"Finally, we fine-tune (i.e., train for an additional epoch) on the training examples (table-question pairs) associated with the appropriate evaluation data sets described in Section 5.",
"During this step we do not have access to exact span offsets in the ground truth annotations and, instead, use weak supervision by matching the first occurrence of the answer text within the textual representation of the table 2 .",
"The Row-Column Intersection model (RCI) is motivated by the idea of decomposing lookup Table QA into two operations: the column selection and the row selection.",
"Combining the predicted answer probability of each row and the probability of each column gives a score for all cells in the table.",
"The highest scoring cell may then be returned as an answer, or highlighting may be applied to the table to aid a user in locating the relevant information.",
"Unlike the pointer network of an adapted Machine Reading Comprehension system (described in Section 3), the RCI model always gives a ranked list of cells rather than answer spans that may cross cell boundaries.",
"We observe that the process of identifying the correct column is often about matching the column header and the type of values in the column to the expected answer type of the question.",
"For example in Table 1, the question has a lexical answer type of party' and the column header for the correct column is Party' and contains values that are political parties.",
"Identifying the correct row is often more difficult.",
"In the example given in Table 1, it is sufficient to match either of the names in the question to the 2 We provide the hyperparameters for the training process in the appendix.",
"value in the Name' column of the row.",
"Note that with weak supervision (Min et al., 2019) we do not know the correct row, so all occurrences of Pro-Administration' are considered correct.",
"Both the Row and Column models of RCI are sequence-pair classifiers.",
"The question is one sequence and the text sequence representation of the row or column is the second sequence.",
"We consider two approaches to the sequence-pair classification task in RCI: Interaction and Representation.",
"Interaction models use the self attention of a transformer over the concatenated two sequences.",
"This is the standard approach to sequence-pair classification tasks, e.g. textual entailment (Devlin et al., 2019) (Wang et al., 2018), in transformer based systems.",
"Representation models independently project each sequence of the sequence-pair to a vector, then compare those vectors.",
"Representation models are motivated by the need to improve efficiency for a practical system.",
"Considering the column classifier, the interaction model requires running a transformer over each question plus column sequence.",
"In contrast, the representation model can pre-process the collection of tables, producing a vector representation of each column for each table, independent of any query.",
"Then, at query time, the query is projected to a vector which is then combined with the vector for each column and classified with a single-layer network.",
"On the WikiTableQuestions-Lookup dev set, we see the column model's time drop from 40 seconds to 0.8 seconds on a K80 GPU when ten queries are batch processed at once.",
"Let a table with m rows and n columns be de-fined as a header, H = [ h 1 , h 2 , ..., h n ] and cell values V = [ v i,j ] , 1 i m, 1 j n .",
"A TableQA instance consists of a table, a question and a ground truth set of cell indices, T I J, I = 1 , 2 , ..., m, J = 1 , 2 , ..., n .",
"In principle, these ground truth cell positions could be annotated with the correct occurrences of the correct values.",
"However, this form of supervision may be too difficult to obtain.",
"We use weak supervision : the ground truth cell indices are found by matching the ground truth answer strings in the table.",
"To train the row and column classifier we find ground truth row and column indices: T r = { i | j : ( i, j ) T } T c = { j | i : ( i, j ) T } Although it is possible to navely construct a sequence representation of columns and rows by simply space separating the contents of each row or column, better performance can be achieved by incorporating the table structure in the sequence representation.",
"We focus on tables with a single header for columns, but this method could also be applied to tables with a hierarchical header, by first flattening the header.",
"representations are formatted as: S ri = n (cid:77) j =1 h ( h j ) v ( v i,j ) S cj = h ( h j ) m (cid:77) i =1 v ( v i,j )",
"Where indicates concatenation and the functions h and v delimit the header and cell value contents.",
"For h we append a colon token (:') to the header string, and for v we append a pipe token ( | ') to the cell value string.",
"The particular tokens used in the delimiting functions are not important.",
"Any distinctive tokens can serve since the transformer will learn an appropriate embedding to represent their role as header and cell value delimiters.",
"Considering again the example in Table 1, the first row would be represented as: Name : Benjamin Contee | Took office : 1789 | Left office : 1791 | Party : Anti-Administration | Notes / Events : | While the second column would have a sequence representation of: Took office : 1789 | 1791 | 1792 | 1793 | 1795 | Both the interaction and the representation models use the sequence representation described above.",
"In the case of the interaction model this sequence is then appended to the question with standard [ CLS ] and [ SEP ] tokens to delimit the two sequences.",
"This sequence pair is then input to a transformer encoder, ALBERT.",
"The final hidden state for the [ CLS ] token is used in a linear layer followed by a softmax to classify the column as either containing the answer or not.",
"In the representation model shown in Figure 2 the representations of the question ( r q ) and the j th column sequence ( r c ) are first computed independently.",
"The representations are taken from the vector that the transformer model produces for the [ CLS ] input token.",
"These vectors are then concatenated (indicated as : ) with their element-wise product (indicated as ) and the element-wise square of their differences.",
"The probability that this column is the target for the question is then given by a softmax over a linear layer.",
"Extension to aggregation questions: Although our focus is on lookup questions, the RCI model can be extended to aggregation questions with the addition of a question classifier. Another transformer is trained to classify the sequence-pair of the question and the table header into one of six categories: lookup, max, min, count, sum and average. The table header is relevant because a question such as How many wins do the Cubs have? can be lookup, count or sum depending on the structure of the table.",
"Taking a threshold on the cell level confidences of the RCI model and aggregating by the predicted question type produces the final answer, either a list of cells for lookup questions or a single number for aggregation questions.",
"row and column classifiers as well as the type of aggregation to train the question classifier. This type of supervision is available in the WikiSQL dataset, but not in WikiTableQuestions.",
"To evaluate these three approaches, we adapt three standard TableQA datasets: WikiSQL (Zhong et al., 2017), WikiTableQuestions (Pasupat and Liang, 2015) and TabMCQ (Jauhar et al., 2016). WikiSQL and WikiTableQuestions include both lookup questions as well as aggregation questions. As mentioned in Section 1, our primary focus in this paper is on lookup questions that require selection and projection operations over tables (i.e., identifying the row and column of a table with very high precision for a given natural language question). We are releasing the processing and evaluation code for the datasets to support reproducibility 3 . Table 2 gives a summary of these datasets.",
"In WikiSQL, the ground truth SQL query is provided for each question, so questions involving an aggregation operation can be automatically excluded. The lookup questions are 72% of the WikiSQL benchmark. WikiSQL has some questions ( < 3% ) with multiple answers. We treat these as a list of relevant items and use information retrieval metrics to measure the quality of a predicted ranked list of cells.",
"TabMCQ is a multiple-choice, lookup TableQA dataset over general science tables. We discard the multiple-choice setting and treat it as a standard open-ended QA task. However, some TabMCQ tables are very large. Of the 68 tables, 17 have more than 50 rows, with two tables containing over a thousand rows. We down-sample the rows that are not relevant for a given question, limiting the largest table size to 50 rows. Unlike the other two datasets, these tables are not Wikipedia tables and have an unusual format. A sample TabMCQ table is provided in the appendix.",
"WikiTableQuestions does not provide a defini-tive indication for what questions are lookup questions. To identify these questions we first filter questions with words indicating an aggregation, such as average', min', max', etc. These questions were further filtered manually to get the WikiTableQuestions-Lookup set.",
"Dataset Train Dev Test",
"and also used three existing models: IS-SP, provided by (Dasigi et al., 2019), TABERT (Yin et al., 2020) and TAPAS (Herzig et al., 2020). IS-SP is a semantic parsing based model trained on WikiTa-blesQuestions (Pasupat and Liang, 2015) dataset (See Section 2 for the details of this work). For building their model we used the code provided in (Gardner et al., 2020). For TABERT we trained the model for WikiSQL using the lookup subset, and for WikiTableQuestions we used the full training set and applied to the lookup subset. For TAPAS we used the trained BASE (reset) models 4 for WikiSQL and applied to the lookup subsets of the dev and test sets.",
"The MRC and MRC xxl models are based on Machine Reading Comprehension, using the base v2 and xxlarge v2 versions of ALBERT. Because this model returns a span rather than a cell prediction, we match each of the top-k span predictions to the closest cell, the cell with the lowest difference in its character offsets. In case multiple of the top-k predictions map to the same cell, these predictions are merged.",
"We also evaluate the two approaches to RCI: interaction (RCI inter ) and representation (RCI repr ). Both models use the base v2 version of ALBERT. Using the xxlarge v2 ALBERT, we also train another RCI interaction model, RCI xxl . For the representation model we found comparable performance on the column classifier but much lower performance on the row classifier. Therefore the RCI repr model uses a representation based classifier for columns, while still using the interaction classifier for rows. The RCI inter model uses interaction classifiers for both rows and columns. Because WikiSQL is the largest dataset by far, for TabMCQ and WikiTableQuestions we first train models on WikiSQL, then fine tune on the target dataset. This gives small but significant gains for TabMCQ but is critical to good performance on WikiTableQuestions.",
"All models except TAPAS produce a ranked list of top-k predictions. We evaluate these predictions using the metrics of Mean Reciprocal Rank (MRR)",
"and Hit@1. Mean Reciprocal Rank is computed by finding the rank of the first correct cell prediction for each question and averaging its reciprocal. If a correct cell is not present in the top-k predictions, it is considered to have an infinite rank. Hit@1 simply measures the fraction of questions that are correctly answered by the first cell prediction.",
"Table 3 shows the results on the lookup versions of WikiSQL, TabMCQ, and WikiTableQuestions. Both the interaction and the representation models of RCI outperform all other methods on WikiSQL, TabMCQ, and WikiTableQuestions. Using the representation model for the column classifier reduces performance by less than two percent on WikiSQL, and less than three percent on TabMCQ, but up to seven percent on WikiTableQuestions.",
"On two of the three datasets both RCI inter and the more efficient RCI repr outperform MRC xxl with far fewer parameters and computational cost. Similarly, RCI with ALBERT-base outperforms even the large version of TAPAS trained on WikiSQL, getting 94.6% Hit@1 compared to the 89.43% Hit@1 of TAPAS large .",
"Dev Test",
"We also compare the performance of the RCI model adapted to aggregation questions to the state-of-the-art TAPAS reported results on WikiSQL. We",
"use the evaluation script provided by TAPAS to produce exactly comparable accuracy numbers for the full WikiSQL dataset. Table 4 shows the RCI model gains over three percent, even without table specific pre-training. It also outperforms TABERT model by a large margin of 18.86%.",
"In Section 4 we described the method to transform a table into sequence representations of the rows and columns. We do an ablation study on the two larger datasets to understand the impact of incorporating table structure into the sequence representation relative to simply space separating the cell contents. Table 5 shows that we make moderate but significant and consistent gains with this approach, over two percent in Hit@1.",
"We also decompose the performance of the tested systems in terms of row and column accuracy. The top predicted cell, if wrong, could have the wrong row, the wrong column, or both. Table 6 shows that predicting the correct column is generally easier than predicting the correct row. An interesting exception occurs with MRC on the WikiSQL benchmark: the row prediction is more accurate than the column prediction. For the MRC system, the table is a sequence of column headers, followed by a sequence of rows. Since the table is serialized in row-major order, all of the relevant information for a row is present locally, while the information for columns is distributed through the table sequence representation.",
"The RCI inter model is the best at both tasks, with RCI repr having the same performance at the row level task, since it uses the same model for rows. The TabMCQ column level performance of MRC is within two percent of RCI inter , which may be",
"surprising, especially considering its performance on WikiSQL. TabMCQ tables are constructed in an unusual way that permits high column prediction performance for an MRC system. The rows in TabMCQ have the structure of sentences, which is helpful for a system trained on the SQuAD and NQ reading comprehension tasks (Refer to the appendix for a sample TabMCQ table).",
"To better understand the advantages and disadvantages of the Row-Column Intersection approach, we examine the 20 cases in the dev set of WikiTableQuestions-Lookup where RCI inter does not provide the correct answer in first position but MRC xxl does. We find nine cases where we could identify nothing that in principle prevents the RCI inter model from answering correctly. We find seven cases where multiple rows need to be considered together, while the RCI models always consider rows independently. WikiTableQuestions includes some questions like Table 7. Although the answer to this question is a cell in the table, it requires something like aggregation to answer. All rows for a given year must be checked to see if there is a 1st' in the Place column. This violates a key assumption of RCI: that rows may be examined independently. The final four cases also violate the assumptions of RCI. In two cases the answer is in the header of the table, while RCI assumes that it will be a cell. In one case the table extraction failed, and in the final case the question asks about the string length of one of the columns where the answer (8) happens to be in the table.",
"We also examine the cases where MRC xxl does not find the correct answer in first position but RCI inter does. The most frequent error, occurring in eight of the seventeen cases, is a 'near-miss': either MRC xxl chooses a value from the wrong column in the right row, or a value from the row before or after. This is illustrated in Table 8, where MRC xxl selects a value near the desired date that is easily confused with a location.",
"In other cases a location from the previous or next row, which are adjacent in the input passage, can be selected instead.",
"We also conduct an error analysis of RCI xxl on the first 50 aggregation questions it misses on the dev set of WikiSQL. The largest category, with 24 cases, is correct answers by RCI xxl counted wrong due to mistakes in the ground truth. Usually (23) the ground truth indicates that there should be COUNT aggregation when no aggregation is correct. For example, 'What is the rank of manager Rob McDonald?', where Rank is one of the table columns, is mistakenly indicated as a COUNT aggregation question.",
"The second largest category (9) occurs when the cells are ranked correctly, and the correct aggregation is predicted, but the threshold for choosing the cells to aggregate is too low (1) or too high (8).",
"Another common error (7) occurs when RCI xxl predicts a lookup question with the answer in a similar numeric column when aggregation is required. For example, the question 'How many votes were taken when the outcome was \"6th voted out day 12\"?' is asked of a table with a Votes column.",
"RCI xxl predicts it as a lookup question with the answer (2-2-1 3-0) from this column, while the ground truth is a COUNT aggregation.",
"The final significant category (7) is cases of questions that are unanswerable.",
"This can occur because the table does not contain an answer or because the answer cannot be computed from a SQL query, such as when the answer is a sub-string of a cell.",
"The final three error cases are: a wrong column is selected (the episode number in series rather than the episode number in season); the question 'What is the result when the 3rd throw is not 8?' is interpreted as 'What is the result when the 3rd throw is something other than 8?' rather than the ground truth 'What is the result when the 3rd throw is literally \"not 8\"?'; and non-Latin characters must be matched to select the correct row.",
"In this paper we propose two novel techniques, RCI interaction and RCI representation, to tackle the problem of locating answers over tables for given natural language questions.",
"These transformer-based models are fine-tuned on ground-truth tables to independently predict the probability that each row or column of a table contains the answer to a question.",
"These probabilities are either used to answer questions directly or to highlight the relevant regions of tables as a heatmap, helping users easily locate the answers in tables.",
"Our experiments show that the RCI model outperforms state-of-the-art transformer-based approaches pre-trained on very large table corpora (TAPAS (Herzig et al., 2020) and TABERT (Yin et al., 2020)), achieving 3.4% and 18.86% additional precision improvements on the standard WikiSQL benchmark, which includes both Lookup and Aggregation questions.",
"The representation model, on the other hand, enables pre-processing the tables and producing the embeddings to store and further use during online query processing, providing significant efficiency advantages without compromising much on the accuracy of finding cell values in tables.",
"As for future work, we plan to explore the exploitation of domain-specific taxonomies and embeddings generated from domain-specific corpora to tackle the problem of answering natural language questions over tables in domains such as finance, aviation, and health care."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"objective"
] |
[
"We present READONCE Transformers, an approach to convert a transformer-based model into one that can build an information-capturing, task-independent, and compressed representation of text.",
"The resulting representation is reusable across different examples and tasks, thereby requiring a document shared across many examples or tasks to only be read once .",
"This leads to faster training and evaluation of models.",
"Additionally, we extend standard text-to-text transformer models to Representation+Text-to-text models, and evaluate on multiple downstream tasks: multihop QA, abstractive QA, and long-document summarization.",
"Our one-time computed representation results in a 2x-5x speedup compared to standard text-to-text models, while the compression also allows existing language models to handle longer documents without the need for designing new pre-trained models.",
"Transformer-based large scale language models (LMs) (Radford et al., 2018; Devlin et al., 2019) are task-independent models that are surprisingly effective when directly fine-tuned on many different end-tasks (Rajpurkar et al., 2016; Wang et al., 2019b,a).",
"However, this approach relies heavily on using end-task supervision to learn to solve two sub-problems simultaneously: extract information 1 from an input document D and solve the end-task (e.g., answer a question about D ).",
"This incentivizes LM-based models to learn to extract only task-specific, and even example-specific, information when fine-tuned on the end-task.",
"For example, a Question Answering (QA) model may learn to only extract the answer from D given the input question.",
"This strategy, while effective on many datasets, is also inefficient.",
"First, it requires the model's pretrained weights to be fine-tuned separately for each end-task, even though the sub-problem of gathering the information content of the input document D is shared across tasks.",
"Second, each D must be re-read from scratch in the context of each example (e.g., once for each question) even when many examples share D .",
"Not only is this computational redundancy undesirable, slow inference can quickly become a bottleneck in deployed, real-time systems if models with billions of parameters must re-read D for every input query.",
"Inspired by humans' ability to read a document and extract key information from it without having to know the use case in advance, we ask the following question: Can we use transformer-based LMs to build compressed representations of text that are example- and task-independent, and hence reusable?",
"Further, can we extend text-to-text transformer architectures to consume such representations in conjunction with text?",
"Prior representation learning approaches attempt to capture the meaning of sentences into a continuous vector (Conneau et al., 2017; Kiros et al., 2015; Reimers and Gurevych, 2019).",
"While they have been effective on downstream classification tasks, it is unclear whether they can capture the information content of entire paragraphs.",
"Moreover, these approaches focus on building fixed-length representations that are used as the input features for task-specific classifiers.",
"In contrast, our goal is to",
"(a) use transformer-based LMs to build compressed representations that scale with the document size , and",
"(b) combine them with example-specific text inputs to produce the more general text output.",
"To this end, we propose an approach to convert any encoder-decoder based transformer LM (such as BART (Lewis et al., 2020)) into a new architecture termed READONCE Transformer, with two key parts: (1) a Document Encoder that reads documents only once to create compressed, information-capturing, reusable representations that we refer to as READONCE Representations; and (2) a Representation+Text Model that consumes these document representations together with task- and example-specific plain text (e.g., a question) to produce text output (e.g., an answer).",
"To ensure that our compressed representations capture the key facts, we use supervision from two factoid QA datasets, SQuAD (Rajpurkar et al., 2016) and UnsupervisedQA (Lewis et al., 2019) to train READONCE Transformers.",
"To solve an end-task, we only need to compute the READONCE Representations of the documents once, and only train the Representation+Text Model to perform the end-task.",
"Our experiments demonstrate that these representations are more effective at capturing information compared to baseline approaches.",
"Our representations also generalize to other tasks such as multihop QA (Yang et al., 2018), abstractive QA (Kocisky et al., 2018), and summarization (Narayan et al., 2018).",
"Since READONCE Representations are computed only once, we can train and infer with models 2x-5x faster than standard approaches, with only a marginal drop in accuracy (about 3 F1 points on QA and 4 Rouge-L points on summarization for a 2x speedup).",
"Moreover, the compression ratio parameter K of our representations provides an easy way to trade off computation time with accuracy.",
"Specifically, our analysis suggests that the resulting model has a computation cost of roughly 1/(2R) + 3/(4K^2) of the base LM, where R is the frequency of document reuse.",
"Additionally, our compressed representation enables us to efficiently combine information from long (or multiple) documents enabling more accurate long-document summarization (Cohan et al., 2018) without needing costly pre-training of new LMs (Beltagy et al., 2020; Zaheer et al., 2020).",
"Representation learning approaches are commonly used to extract fixed-length sentence embeddings (Conneau et al., 2017; Kiros et al., 2015; Wang et al., 2020) from variable-length text inputs.",
"Such fixed length representations have enabled the development of simpler downstream models that do not have to deal with the variable-lengths of textual inputs.",
"However, these representations have mainly been used for simple classification tasks on short input texts (Bowman et al., 2015; Wang et al., 2019b).",
"The word-level representations from RNNs or transformers are also variable-length, but uncompressed.",
"While such representations have been re-used with RNNs (Peters et al., 2018) and are easy to combine with text input, it is not immediately clear how to combine representations from transformers with text, which is what we propose.",
"Recent work (Reimers and Gurevych, 2019; He et al., 2020; Artetxe and Schwenk, 2019; Karpukhin et al., 2020) has tried building document-embedding using large-scale language models as well.",
"However these fixed-length representations have mostly been built to identify similar documents (Reimers and Gurevych, 2019; Karpukhin et al., 2020) and are not used directly for QA.",
"QuASE (He et al., 2020), also used question-answering supervision for transfer learning but do not produce re-usable representations.",
"Artetxe and Schwenk (2019) learned multi-lingual sentence embeddings that may be able to capture the knowledge present in a sentence but they were designed for BiLSTMs.",
"Some large-scale LMs have been especially designed to handle long documents (Yang et al., 2019; Beltagy et al., 2020; Zaheer et al., 2020) too but need to be pre-trained on large corpora, whereas we can use any pre-trained LM.",
"Aspects of our work also bear resemblance to domain adaptation (Daume III and Marcu, 2006), transfer learning (Pan and Yang, 2010), and multi-task learning (Caruana, 1993), but our focus is on learning information-capturing representations from transformer-based models, which has not been explored by prior work.",
"While model distillation (Hinton et al., 2015) can also result in speedups, these techniques are orthogonal and can be easily incorporated in our framework (as we show in our experiments).",
"Our goal in this work is to identify the optimal architecture to extract information-capturing reusable representations.",
"At the same time, we also need to find the optimal architecture to use such representation in conjunction with text inputs.",
"So at a high level (as shown in Fig. 1), we need to develop two systems: (1) A model to compute the representation, Document Encoder and (2) A general model for tasks that can consume vector representations and text, Representation+Text Model.",
"Given the recent success and generality of encoder-decoder models (Radford et al., 2018; Raffel et al., 2020; Lewis et al., 2020), we focus on developing models for such an architecture.",
"We present the potential choices for each model, with the final model used in our system indicated by a *.",
"Given an encoder-decoder model, there are different ways to compute representations for a document d with tokens { t 1 , . . . , t n } .",
"We focus on using the output representation generated by the encoder, represented with h i for each token t i .",
"Fixed Length Aggregation.",
"The most common approach is to extract a single representation from a sequence of vector (Kiros et al., 2015; Conneau et al., 2017).",
"While this can be a very compact representation of a document, it tends to be very lossy, especially when dealing with large documents.",
"As a result, these representations are mainly used for classification (Conneau et al., 2017; Reimers and Gurevych, 2019) or retrieval (Karpukhin et al., 2020), and have not been shown to capture the content of the document.",
"E.g., InferSent (Conneau et al., 2017) presented a self-attentive approach to extract sentence embeddings using r = Σ_i U(h_i) h_i (Eq. 1), where U is a function that computes a scalar attention weight over each h_i.",
"To reduce information loss, we extend these models to produce M representation vectors by learning M sets of parameters θ_j for j ∈ {1, ..., M}, i.e., r_j = Σ_i U_j(h_i) h_i, where U_j(h_i) = e^(θ_j · h_i) / Σ_i' e^(θ_j · h_i').",
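A minimal NumPy sketch of this M-vector pooling, assuming randomly initialized parameters θ_j; the function name and shapes are illustrative, not taken from the paper's released code.

```python
import numpy as np

def multi_head_attentive_pooling(H, theta):
    """H: (n, d) encoder outputs h_i; theta: (M, d) learned parameters.
    Returns (M, d) pooled vectors r_j = sum_i U_j(h_i) * h_i, with
    U_j(h_i) = softmax over tokens of theta_j . h_i."""
    scores = theta @ H.T                                 # (M, n) dot products
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)        # softmax over tokens i
    return weights @ H                                   # (M, d) weighted sums

H = np.random.randn(163, 768)     # e.g. a 163-token document encoding
theta = np.random.randn(21, 768)  # M = 21 pooling heads
R = multi_head_attentive_pooling(H, theta)
print(R.shape)                    # (21, 768)
```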
"Special Token Representations.",
"With the advent of transformer models, another common approach is adding a special [CLS] (Radford et al., 2018; Devlin et al., 2019) or <s> (Liu et al., 2019) token to the context.",
"The output representation of this special token can then be used as inputs to classifiers and other down-stream models.",
"Again, a single representation can be lossy, so we generate M representations by inserting multiple special tokens.",
"We can dynamically adjust the number of special tokens based on the input length to produce a variable-length representation.",
"To achieve a compression ratio of 1/k, we insert N/k special tokens and use their representations.",
"We consider two ways of inserting special tokens into the context: (1) Suffix: add them at the end of the context; (2) Interleave: add them after every k tokens.",
"While the first approach preserves context continuity, the latter might more directly incentivize the model to capture local context.",
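The following sketch illustrates the two insertion schemes at the token level, under the simplifying assumption that we operate on word strings rather than subword ids; the [CLS] literal is a stand-in for whatever special token the model defines.

```python
def insert_special_tokens(tokens, k, mode="suffix", special="[CLS]"):
    """Insert roughly len(tokens)//k special tokens whose output states
    become the document representation (compression ratio 1/k)."""
    m = max(1, len(tokens) // k)
    if mode == "suffix":             # all special tokens after the context
        return tokens + [special] * m
    if mode == "interleave":         # one special token after every k tokens
        out = []
        for i, t in enumerate(tokens, start=1):
            out.append(t)
            if i % k == 0:
                out.append(special)
        return out
    raise ValueError(mode)

toks = "the movie was directed by tony scott in 1986".split()
print(insert_special_tokens(toks, k=3, mode="suffix"))
print(insert_special_tokens(toks, k=3, mode="interleave"))
```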
"Sliding Window Aggregation*.",
"We apply the idea of aggregating single-vector representations to generate a variable-length representation.",
"We apply an aggregation function F over sliding windows of size W tokens to capture the local context of the window (akin to CNNs).",
"For a stride length of S , this would result in representation vectors: r j = F ( { h S j , , h S j + W } ) (2) where F { , , } corresponds to mean-pooling, linear weighting (as described in Eqn.",
"(1)), and max-pooling, respectively.",
"Figure 2 shows how we would compute these representations using a window-size of W=2 with no overlap (i.e. S=2) and the linear weighting function.",
"The resulting READONCE Representations would have M = N/2 vectors, where N is the number of tokens in the input.",
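A small NumPy sketch of Eq. (2) with mean-pooling; max-pooling can be swapped in via the fn argument, while the learned weighting would need parameters and is omitted here.

```python
import numpy as np

def sliding_window_pool(H, W=2, S=2, fn=np.mean):
    """Aggregate encoder states H (n, d) over windows of size W with
    stride S, yielding roughly n/S representation vectors."""
    n = H.shape[0]
    return np.stack([fn(H[j:j + W], axis=0) for j in range(0, n - W + 1, S)])

H = np.random.randn(8, 4)     # 8 token states of dimension 4
R = sliding_window_pool(H)    # W=2, S=2 -> M = N/2 vectors
print(R.shape)                # (4, 4)
```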
"SentenceBERT Baseline.",
"For completeness, we also use an existing transformer-based SentenceBert model (Reimers and Gurevych, 2019) 4 to compute the representation of each sentence in the document.",
"Since the space of these representations might be different, we learn a single-layer feedforward network to project the representations into the right space.",
"More complex designs, such as special token embeddings, position embeddings, and indicator features, are left as future work.",
"Prefixing special tokens generally worsened performance.",
"We use the BERT-Large NLI tokens, which performed better than the NLI-STSB representations in our experiments.",
"For fair comparison to models with variable compression ratio k , we also use SentenceBERT representations for a sliding window of k tokens.",
"Next, we present our modification to downstream task models to use both text and our generated READONCE Representations.",
"Since most NLP tasks can be re-formulated as a text-to-text problem (Radford et al., 2018; Raffel et al., 2020), we focus on extending text-to-text encoder-decoder models to a (vec+text)-to-text model.",
"Append to Encoder*.",
"Since the transformer block in an encoder can handle any input length in each layer, one possible approach is to append the representations to the L th layer of the encoder.",
"This allows the model to focus on parsing the input example text (e.g., the question) in the first L-1 layers, followed by focusing on answering the question in the remaining layers.",
"We show this model in Figure 3 where the encoder only processes the Q tokens of the question for the first L layers.",
"Once the M READONCE Representations are added to the L-th layer, all the subsequent layers produce M + Q vectors by attending over both the representations and the text.",
"Finally, an unmodified decoder produces the output answer.",
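A conceptual sketch of this appending scheme, with identity functions standing in for real transformer layers; it only illustrates where the M representation vectors enter the stack, not the model itself.

```python
import numpy as np

def encode_with_appended_reps(question_states, read_once_reps, layers, L=6):
    """Run the first L-1 encoder layers over the Q question tokens alone,
    then append the M precomputed READONCE vectors and run the remaining
    layers over all M + Q positions. `layers` is a list of callables
    mapping (n, d) -> (n, d), stand-ins for real transformer blocks."""
    h = question_states
    for layer in layers[: L - 1]:
        h = layer(h)                                     # question-only layers
    h = np.concatenate([h, read_once_reps], axis=0)      # append at layer L
    for layer in layers[L - 1:]:
        h = layer(h)                                     # joint attention layers
    return h

dummy_layer = lambda x: x + 0.0                          # identity stand-in
layers = [dummy_layer] * 12
q = np.random.randn(10, 768)                             # 10 question tokens
reps = np.random.randn(21, 768)                          # M = 21 representations
print(encode_with_appended_reps(q, reps, layers).shape)  # (31, 768)
```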
"Modify Transformer Block Attention.",
"Rather than just modifying the input, we consider an alternate approach of modifying the transformer block itself.",
"Similar to PlotMachines (Rashkin et al., 2020), we view the representation as a memory that the self-attention block can attend over, in addition to the input text.",
"We modify the self-attention blocks in both the encoder and the decoder to use two separate attention modules for these two input types and average the resulting vectors.",
"With this design, the Representation+Text Model should ideally gain extra capacity to model the interaction between the representation and the input text.",
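A single-head NumPy sketch of the modified block, assuming shared projection matrices for the two attention modules; this is our simplification for illustration, not the paper's exact parameterization.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(Q, K, V):
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def dual_attention_block(text_states, memory, W_q, W_k, W_v):
    """One attention over the text, one over the memory
    (READONCE Representations), with the outputs averaged."""
    Q = text_states @ W_q
    over_text = attend(Q, text_states @ W_k, text_states @ W_v)
    over_mem = attend(Q, memory @ W_k, memory @ W_v)
    return 0.5 * (over_text + over_mem)

d = 64
W_q, W_k, W_v = (np.random.randn(d, d) * 0.1 for _ in range(3))
text = np.random.randn(10, d)
mem = np.random.randn(21, d)
print(dual_attention_block(text, mem, W_q, W_k, W_v).shape)  # (10, 64)
```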
"Given the overall architecture of such a system (shown in Fig. 4), we next focus on training this model to produce READONCE Representations that capture the information present in the document.",
"While prior representation learning models have often focused on classification tasks, we instead use the reading comprehension QA task to ensure this information-capturing property.",
"If a model is able to use just the READONCE Representations to answer the questions grounded in the document, the representations would contain the information needed to answer such questions.",
"The key question here is: Which QA datasets are most suitable for training a compact yet information-capturing document representation?",
"Low-level semantic QA datasets (Michael et al., 2018; He et al., 2015) do not allow for any compression, as the questions require knowledge of every word in the input sentence.",
"More complex multi-hop QA datasets such as HotpotQA (Yang et al., 2018) are also not appropriate, as they focus on learning to reason in addition to capturing the information.",
"Shallow reading comprehension tasks provide a sweet spot between these two extremes, as extracting key information from the given document is sufficient to answer the questions.",
"Further, unlike semantic QA tasks, the questions only focus on the key facts mentioned in a document, which can be captured in a compressed representation.",
"Only modifying the encoder or the decoder resulted in slightly lower performance.",
"To verify the generality of the READONCE Representations, we train models to perform multi-hop reasoning, abstractive QA and summarization using our learned representations.",
"Specifically, we freeze the Document Encoder model and use it to generate the representations for documents.",
"We further fine-tune the Representation+Text Model on the downstream task to produce the output label given the READONCE Representations and any example-specific input.",
"We first evaluate the different potential architectural choices for extracting and using document representations discussed in 3.1 and 3.2, respectively.",
"While our main interest is in learning effective representations, we also need to find the optimal Representation+Text Model architecture that can consume the representation.",
"We train the entire model on the factoid QA task to ensure that the document representations do capture factual knowledge.",
"We primarily use the SQuAD reading-comprehension dataset (Rajpurkar et al., 2016) containing more than 100,000 crowd-sourced factoid questions.",
"We further augment this dataset with about 500,000 rule-based questions from the UnsupervisedQA (UQA) dataset (Lewis et al., 2019).",
"This increases the size of the training dataset while also introducing question diversity.",
"To avoid these automatically generated questions overwhelming training, we ensure that the same number of questions are selected from both datasets in each batch (by duplicating SQuAD questions).",
"In the same vein, we evaluate each model based on their performance on the SQuAD task.",
"Unless otherwise mentioned, we use the BART-Large model in all our experiments, and optimize the model with cross-entropy loss.",
"We set the learning rate to 1e-5 for the weights initialized from the BART model, and to 1e-4 for the randomly initialized newly added weights, which was shown to be beneficial in Peters et al. (2019).",
"For other hyper-parameters, we follow Lewis et al. (2020).",
"We ran all the experiments on RTX 8000 with 48GB GPU memory.",
"No experiment used the complete GPU memory. We kept the batch size and gradient accumulation steps constant (both at 8) across different compression ratios.",
"To be able to evaluate the representations, we need to first select the architecture of the model consuming these representations.",
"We explore the different choices for the Representation+Text Model discussed in 3.2, assuming the representation is generated by a simple Document Encoder: mean aggregation over a sliding window with both window size and stride being 8 tokens.",
"The results are shown in Table 1.",
"We see that appending READONCE representations too early (L=1) or too late (L=12) in the encoder stack is not as effective as appending about half-way (L=6).",
"The scores on UQA correlate well with the scores on SQuAD, with close to 90 F1 for most models.",
"We also experimented with L=3 and L=9, and did not find any significant gains.",
"We suspect that appending too early does not allow the model to focus on understanding the question, whereas appending too late does not leave enough room for cross-attention between the question and the representations.",
"Modifying the transformer block to attend over these representations results in a reasonable F1 score on SQuAD, but it is still outperformed by our simple Append architecture.",
"Hence, for the rest of this work, we stick to the simpler architecture of appending the representation at the 6 th layer, denoted Append(L=6).",
"Given the Representation+Text Model architecture chosen above, we now explore potential Document Encoder architectures to extract READONCE Representations.",
"For a fair comparison, we ensure that all our evaluated representations use, on average across a dataset, the same number of vectors to represent documents.",
"Table 2 presents EM and F1 scores on SQuAD for the various architectural choices discussed in 3.1.",
"The top 3 rows explore the sliding window architecture with both window size and stride length of 8 (i.e., no overlap between windows), with the three different aggregation functions mentioned earlier.",
"We see that both the mean and the learned weighted sum have comparable performance on this task, and outperform the max-pooling function.",
"We also evaluate the impact of increasing the overlap between windows by increasing the window size (not changing the stride length keeps the average number of vectors constant).",
"For the learned weighted sum function, this results in a 5 point F1 drop, possibly due to the aggregation function having to operate over a larger window.",
"We next evaluate the approaches inspired by prior work, where we add special tokens and use the representations of these tokens.",
"For the BART model, we use a newly added [CLS] token as our special token.",
"We see from Table 2 that neither appending these tokens at the end nor interleaving them in the input results in representations comparable to the sliding window based approaches.",
"The sliding window representations outperform the pre-trained sentence-based representations from SentenceBERT, irrespective of the number of vectors used.",
"Finally, if we fix the representation length to 21 vectors (computed based on the average token length of SQuAD: 163.7), the learned representations are still not as effective.",
"Based on this set of experiments, we use the sliding window architecture for the Document Encoder with the learned weighted sum as the aggregation function, and append these representations to the 6th layer of the final task-dependent Representation+Text Model.",
"Next, we evaluate the quality of our representations by using them on three downstream tasks, different from the tasks READONCE Transformers are trained on, demonstrating faster training and inference.",
"We then show the benefit of using our representation when documents are much longer than the token limit of the underlying LM.",
"Tasks: We consider three end-tasks, extractive QA, summarization, and abstractive QA, to evaluate our system using the following datasets: (1) HotpotQA (Yang et al., 2018), a multi-hop reasoning extractive QA dataset.",
"(2) XSUM (Narayan et al., 2018), an abstractive news summarization dataset; and (3) NarrativeQA (Kocisky et al., 2018), an abstractive QA dataset where answers are not spans from the input document.",
"More details about these datasets and metrics are provided in App. B.",
"We also compared W=8, S=2 with W=2, S=2 in our early experiments and noticed a similar trend: the smaller sliding window performs better.",
"Special token prefixes scored similar to the Suffix model.",
"Even when the SlidingWindow approach is limited to M = N/32 vectors, it achieves a higher F1 score (52.4) than SentenceBERT.",
"Baselines: We compare READONCE Transformers to BART-based QA models that use the document text directly to answer the given question.",
"Since these models use text directly without any lossy compression, their score is best viewed as an upper bound for any representation-based BART model, including ours.",
"We train the BART model to generate the answer given the entire document and question (for XSUM, we use \"Summary\" as the question).",
"In addition to BART-Large, we evaluate two smaller models: BART-Base and DistilBART (Shleifer and Rush, 2020).",
"Since our representations were trained on SQuAD and UQA, we also first fine-tune all our BART models on the same datasets.",
"READONCE Models: We freeze the parameters of the Document Encoder to generate the representations for all the documents in the datasets.",
"We then use these representations with our Representation+Text Model, which is further fine-tuned on each end-task.",
"To evaluate the impact of our pre-training on QA datasets, we compare our model to the READONCE architecture initialized directly with the BART model weights (i.e., without the QA-based representation training).",
"To illustrate the architecture-independence of our approach and orthogonality to traditional compression methods, we also train and evaluate READONCE models using the BART-Base and DistilBART models.",
"These models were also first trained on SQuAD +UQA datasets to learn the document representation.",
"See App. C for more details.",
"Since our Representation+Text Model can handle a variable number of representation vectors, we can change this compression ratio , on-the-fly, without having to change the model architecture.",
"Specifically, we can use a stride length of K in our Document Encoder to generate representations that are 1/K-th of the input length, and then feed them to a downstream model.",
"By reducing K , we can reduce the compression ratio and improve the model accuracy, at the cost of increased runtime.",
"Interestingly, we discovered that we don't even need to re-train Document Encoder for each value of K .",
"We can achieve a performance comparable to encoders trained individually for each value of K , by using the Document Encoder trained on K = 8 and only varying K during the fine-tuning step.",
"First, we assess the ability of READONCE Representations to capture document information as compared to using the original document text.",
"As shown in Table 3, our framework at K=2 is about 2x faster than BART-Large while being only 3 F1 and 4 Rouge-L points behind this model with full access to the text.",
"This demonstrates that READONCE Representations do capture most of the relevant information in the document.",
"The different compressed models can also result in smaller (DistilBART) or comparable (BART-Base) speedups, but (1) our accuracy vs. speed trade-off is more easily controllable via K, and (2) we can apply our framework on these models to achieve similar speedups.",
"(Table 3: accuracy and seconds per batch for each architecture on HotpotQA, Narr.QA, and XSUM.)",
"Lastly, we note that the READONCE system, which simply uses the BART model parameters, is about 6 F1 and 14 Rouge-L points behind our model with learned representations.",
"This shows that our model does utilize the factoid questions to learn to extract meaningful representations; without such training, the representations obtained from the pre-trained models are not as effective.",
"While more recent LMs can outperform BART (e.g., Pegasus (Zhang et al., 2020) for summarization), we believe similar trade-offs can be achieved by applying our framework to these newer models.",
"We also observe drops in score when using the BART model parameters in only the Document Encoder or only the Representation+Text Model.",
"One key advantage of READONCE Representations is that the model needs to read the document only once , and can reuse pre-computed representations for multiple examples or even multiple tasks.",
"Specifically, if a document is repeated across R examples (the replication factor) and we use a compression ratio of K, our computation cost per question is roughly only 1/(2R) + 3/(4K^2) relative to a baseline seq2seq model (cf. App. C.3 for an analysis).",
"In other words, the higher the replication factor R or the compression ratio K , the higher the speedup achieved via READONCE Representations.",
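As a quick back-of-the-envelope check of this formula (our arithmetic, plugging in the replication factor reported later for NarrativeQA):

```python
def relative_cost(R, K):
    """Approximate compute cost of READONCE relative to a plain seq2seq
    baseline: 1/(2R) for encoding amortized over R reuses, plus
    3/(4K^2) for processing the compressed representation."""
    return 1 / (2 * R) + 3 / (4 * K ** 2)

print(relative_cost(R=29.7, K=2))  # ~0.20 -> roughly a 5x speedup
print(relative_cost(R=1, K=8))     # ~0.51 -> ~2x even with no reuse
```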
"Our model exhibits a speedup of 2x-5x in training time compared to the different BART architectures (Figure 5).",
"Similarly, we observe a 2x-3x speedup in the inference time (as shown in Figure 6), which again plateaus out at K=8.",
"Note that the time reported for our model includes the cost of reading READONCE Representations from disk as well as some fixed costs.",
"These costs form a larger fraction of the overall time for faster models.",
"Hence, while our speedups do not exactly match up to the theoretical analysis, the empirical trends are as expected: we see larger speedups on the NarrativeQA dataset which has a higher replication factor R .",
"In general, the R value for our datasets (e.g., R=29.7 for NarrativeQA) is within the range of other datasets (e.g., R=9.4 for NewsQA and R=13.9 for DROP).",
"Note that even when R=1 (e.g., XSUM), we observe a speedup due to the compression ratio K.",
"(Figure 5: Training time in seconds per batch.)",
"We also experiment with varying the values of the compression ratio K. As shown in Figure 7, across all three of our datasets, as the value of K increases, the model's accuracy goes down due to increased compression, but so does the training time.",
"As compared to the upper-bound BART-Large model, we see a large gain in speed when K=2, with diminishing gains as K reaches 8.",
"(Figure 7: The training time (seconds per batch) vs. performance trade-off achieved by the READONCE model with different values of K on our three evaluation tasks.)",
"Compressing document representations also enables the downstream model to reason over documents longer than its maximum token length limit T. For example, we can compute representations of document chunks with up to T tokens each and concatenate them together.",
"Since these representations do not rely on any position embeddings in Representation+Text Model, theoretically we can use as many representation vectors as needed.",
"Given GPU memory limits, let's assume we can only accommodate documents of up to length T. Given a compression ratio K, we can compute READONCE Representations for K such length-T chunks, increasing the capacity of our downstream model to T*K tokens.",
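A sketch of this capacity bookkeeping, assuming a token list as input and ignoring the encoding itself as well as chunk overlap:

```python
def chunk_and_compress_capacity(doc_tokens, T=512, K=8):
    """Split a long document into up to K chunks of at most T tokens
    each and report how many representation vectors their compressed
    encodings would occupy at ratio 1/K, for a total input capacity
    of T*K tokens. Encoding is omitted in this sketch."""
    limit = min(len(doc_tokens), T * K)
    chunks = [doc_tokens[i:i + T] for i in range(0, limit, T)]
    vectors = sum(max(1, len(c) // K) for c in chunks)
    return chunks, vectors

doc = ["tok"] * 2270                  # an average-length PubMed document
chunks, vecs = chunk_and_compress_capacity(doc)
print(len(chunks), vecs)              # 5 283
```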
"For simplicity, we ignore the question, as it tends to be much shorter than T. To assess the impact of increased model capacity, we evaluate our learned representations on the long-document summarization task PubMed (Cohan et al., 2018).",
"We follow Cohan et al. (2018) and only include the first 4 sections from each document (average length = 2270 tokens).",
"We vary the memory budget from T=512 to T=256 and compare our approach to two BART seq2seq baselines: a simple truncation baseline with T/4 tokens from each section, and a sliding-window baseline (often used in QA models), extended here to summarization by concatenating the summaries of length-T chunks of the input document.",
"For the READONCE Transformer with a compression ratio of K, we can accommodate K*T/4 tokens per section, resulting in a total of T representations from the 4 sections.",
"We choose to obtain these T representations using K/2 chunks from each section, with each chunk containing T/2 tokens.",
"16 37.8 35.5 33.8 37.0 36.8 36.4 36.6 34.0 36.5 36.9 34.6 37.2 Figure 8: Accuracy of models under different maximum window length assumptions on PubMed dataset.",
"ROUGE-L scores of these models are depicted in Figure 8.",
"As we reduce T for the underlying transformer model from 512 to 256, the score of the baseline BART model drops to 35.5 ROUGE-L.",
"When used with the sliding window technique, the performance is even worse, likely due to the naive aggregation of the summaries.",
"If we allow an overlap of O tokens between chunks, the capacity changes to T*K - O*(K-1).",
"We also evaluate NarrativeQA; see App.",
"Our approach, on the other hand, concatenates document representations, allowing the downstream model to build a coherent summary.",
"We see the ROUGE-L score only drops to 36.6 when K=2 (with model capacity dropping from 1024 to 512 tokens) and a much smaller drop from 37.0 to 36.5 when K=8 (with model capacity dropping from 3520 to 1472 tokens).",
"This simulation shows that concatenating READONCE Representations is a simple yet effective way to increase the capacity of existing models.",
"This work introduced READONCE Transformers, a novel approach for using large scale transformer-based language models to both build and consume reusable document representations.",
"Akin to humans' ability to read a document and extract useful information without knowing the end-use, READONCE Representations are compact, information-capturing document representations that can be pre-computed once, in a task- and example-independent fashion.",
"Our results on extractive QA, summarization, and abstractive QA tasks demonstrate that using READONCE Representations, in lieu of re-reading document text in the context of every example, results in substantially faster training and inference, at a modest cost in accuracy.",
"The READONCE framework also offers an easy way to control the trade-off between speed and accuracy (via the compression ratio parameter), and enables the use of standard transformer architectures on long documents beyond the model's token limit.",
"Identifying the ideal compact document representations in our controlled setting opens up the possibility of efficient open-domain QA, where models retrieve and reason directly over these representations.",
"We leave an exploration of the training of the retrieval function, often with only answer supervision and ideally in an end-to-end setting, to future work.",
"We thank Dirk Groeneveld for providing the output of the Quark system for HotpotQA and the Beaker team for their support with the experiments."
] | [
"method",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"method",
"objective",
"objective",
"abstain",
"method",
"method",
"result",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"other"
] |
[
"This paper explores the task of Difficulty-Controllable Question Generation (DCQG), which aims at generating questions with required difficulty levels.",
"Previous research on this task mainly defines the difficulty of a question as whether it can be correctly answered by a Question Answering (QA) system, lacking interpretability and controllability.",
"In our work, we redefine question difficulty as the number of inference steps required to answer it and argue that Question Generation (QG) systems should have stronger control over the logic of generated questions.",
"To this end, we propose a novel framework that progressively increases question difficulty through step-by-step rewriting under the guidance of an extracted reasoning chain.",
"A dataset is automatically constructed to facilitate the research, on which extensive experiments are conducted to test the performance of our method.",
"The task of Difficulty-Controllable Question Generation (DCQG) aims at generating questions with required difficulty levels and has recently attracted researchers' attention due to its wide application, such as facilitating certain curriculum-learning-based methods for QA systems (Sachan and Xing, 2016) and designing exams of various difficulty levels for educational purpose (Kurdi et al., 2020).",
"Compared to previous QG works which control the interrogative word (Zi et al., 2019; Kang et al., 2019) or the context of a question (Liu et al., 2020, 2019a), few works have been conducted on difficulty control, as it is hard to formally define the difficulty of a question.",
"To the best of our knowledge, Gao et al. (2019) is the only previous work of DCQG for free text, and defines question difficulty as whether a QA model can correctly answer it.",
"This definition gives only two difficulty levels and is mainly empirically driven, lacking interpretability for what difficulty is and how difficulty varies.",
"In this work, we redefine the difficulty level of a question as the number of inference steps required to answer it , which reflects the requirements on reasoning and cognitive abilities (Pan et al., 2019).",
"Existing QA systems perform substantially worse in answering multi-hop questions than single-hop ones (Yang et al., 2018), also supporting the soundness of using reasoning hops to define difficulty.",
"To achieve DCQG with the above definition, a QG model should have strong control over the logic and reasoning complexity of generated questions.",
"Graph-based methods are well suited for such logic modelling (Pearl and Paz, 1986; Zhang et al., 2020).",
"In previous QG researches, Yu et al. (2020) and Pan et al. (2020) implemented graph-to-sequence frameworks to distill the inner structure of the context, but they mainly used graphs to enhance document representations, rather than to control the reasoning complexity of questions.",
"In this paper, we propose a highly-controllable QG framework that progressively increases diffi-culties of the generated questions through step-by-step rewriting.",
"Specifically, we first transform a given raw text into a context graph, from which we sample the answer and the reasoning chain for the generated question.",
"Then, we design a question generator and a question rewriter to generate an initial simple question and step-by-step rewrite it into more complex ones.",
"As shown in Fig. 1, Tom Cruise is the selected answer, and Q 1 is the initial question, which is then adapted into Q 2 by adding one more inference step (i.e., N 1 → N 2) in the reasoning chain.",
"That is, it requires to infer Top Gun is the film directed by Tony Scott before answering Q 1 .",
"Similarly, we can further increase its difficulty level and step-by-step extend it into more difficult questions (i.e., Q 3 , Q 4 and Q 5 ).",
"To train our DCQG framework, we design effective strategies to automatically construct the training data from existing QA datasets instead of building one from scratch with intensive human efforts.",
"Specifically, we utilize HotpotQA (Yang et al., 2018), a QA dataset where most questions require two inference steps to answer and can be decomposed into two 1-hop questions.",
"Thus, we get the dataset that contains 2-hop questions and their corresponding 1-hop reasoning steps.",
"Having learned how to rewrite 1-hop questions into 2-hop ones with this dataset, our framework can easily extend to generating (n+1)-hop questions from n-hop ones with only a small amount of corresponding data, because the rewriting operation follows fairly regular patterns regardless of the exact value of n, as shown in Fig. 1. Extensive evaluations show that, compared with a set of strong baselines, our method can controllably generate questions with the required difficulty while keeping competitive question quality.",
"In summary, our contributions are as follows: (1) to the best of our knowledge, this is the first work on difficulty-controllable question generation with question difficulty defined as the number of inference steps required to answer the question; (2) we propose a novel framework that achieves DCQG through step-by-step rewriting under the guidance of an extracted reasoning chain; and (3) we build a dataset that can facilitate training of rewriting questions into more complex ones, paired with constructed context graphs and the underlying reasoning chains of the questions.",
"Deep Question Generation Most of the previous QG researches (Zhou et al., 2017; Pan et al., 2019; Liu et al., 2020) mainly focused on generating single-hop questions like the ones in SQuAD (Rajpurkar et al., 2016).",
"In the hope that AI systems could provoke more in-depth interaction with humans, deep question generation aims at generating questions that require deep reasoning.",
"Many recent works attempted to conquer this task with graph-based neural architectures.",
"Talmor and Berant (2018) and Kumar et al. (2019) generated complex questions based on knowledge graphs, but their methods could not be directly applied to QG for free text, which lacks clear logical structures.",
"In sequential question generation, Chai and Wan (2020) used a dual-graph interaction to better capture context dependency.",
"However, they considered all the tokens as nodes, which led to a very complex graph.",
"Yu et al. (2020) tried to generate multi-hop questions from free text with the help of entity graphs constructed by external tools.",
"Our work shares a similar setting with Yu et al. (2020), and we further explore the problem of how to generate deep questions in a more controllable paradigm.",
"Difficulty-Controllable Question Generation DCQG is a relatively new task.",
"Gao et al. (2019) classified questions as easy or hard according to whether they could be correctly answered by a BERT-based QA model, and controlled the question difficulty by modifying the hidden states before decoding.",
"Another research on QG for knowledge graphs (Kumar et al., 2019) estimated the question difficulty based on popularity of the named entity.",
"They manipulated the generation process by incorporating the difficulty level into the input embedding of the Transformer-based decoder.",
"In our work, we control the question difficulty based on the number of its reasoning hops, which is more explainable.",
"Question Rewriting It is another emerging trend in the recent researches, demonstrating benefits to both QG and QA tasks.",
"With rewriting, QG models produced more complex questions by incorporating more context information into simple questions (Elgohary et al., 2019; Vakulenko et al., 2020), and QA pipelines could also decompose the original complex question into multiple shorter questions to improve model performance (Min et al., 2019; Khot et al., 2020).",
"(Figure 2: An overview of our proposed framework.)",
"Given input context text C and a specific difficulty level d, our objective is to generate a (question, answer) pair (Q, A), where A is a sub-span of C and Q requires d-hop reasoning to answer.",
"Fig. 2 and Algorithm 1 give an overview of our proposed framework.",
"First, we construct a context graph GCG corresponding to the given context, from which a subgraph GT is selected to serve as the reasoning chain of the generated question.",
"Next, with the reasoning chain and other contextual information as input, a question generator (QG Initial ) produces an initial simple question Q 1 .",
"Then, Q 1 is fed to a question rewriting module (QG Rewrite ), which iteratively rewrites it into a more complex question Q i ( i = 2 , 3 , . . . , d ) .",
"In what follows, we will introduce the whole generation process in more details.",
"Context Graph Construction: We follow the method proposed by Fan et al. (2019) to build the context graph GCG.",
"Specifically, we first apply open information extraction (Stanovsky et al., 2018) to extract ⟨subject, relation, object⟩ triples from context sentences.",
"Each triple is then transformed into two nodes connected with a directed edge, like 'A Perfect Murder is a 1998 American crime film' in Fig. 2. The two nodes respectively represent the subject and object, and the edge describes their relation.",
"Coreference resolution (Lee et al., 2017) is applied to merge nodes referring to the same entity; for instance, A Perfect Murder is merged with It in Fig. 2.",
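A minimal sketch of this construction using networkx, with toy triples and a hand-written coreference map standing in for the open IE and resolver outputs:

```python
import networkx as nx

def build_context_graph(triples, coref_clusters):
    """Build a directed context graph from <subject, relation, object>
    triples, merging mentions that corefer. `coref_clusters` maps each
    mention to a canonical entity string (a stand-in for the output of
    a coreference resolver)."""
    canon = lambda m: coref_clusters.get(m, m)
    g = nx.DiGraph()
    for subj, rel, obj in triples:
        g.add_edge(canon(subj), canon(obj), relation=rel)
    return g

triples = [
    ("A Perfect Murder", "is", "a 1998 American crime film"),
    ("It", "was directed by", "Andrew Davis"),
]
coref = {"It": "A Perfect Murder"}   # resolver output, simplified
g = build_context_graph(triples, coref)
print(list(g.edges(data=True)))
```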
"Reasoning Chain Selection: With the context graph constructed, we sample a connected subgraph GT consisting of d + 1 nodes from it to serve as the reasoning chain of the generated question.",
"A node N 0 is first sampled as the answer of the question if it is, or is linked with, a named entity whose node degree is greater than one.",
"Next, we extract from GCG a maximum spanning tree GL , with N 0 as its root node, e.g., the tree structure shown in Fig. 1. GCG is temporarily considered as an undirected graph at this step.",
"We then prune GL into GT to keep only d + 1 nodes.",
"During pruning, we consider the sentence position where each node is extracted in order to make the reasoning chain relevant to more context.",
"In the following, we will denote a node in GT as N i ( i = 0 , 1 , . . . , d ) , where each node is subscripted by preorder traversal of GT , and NP ( i ) as the parent of N i .",
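A rough networkx sketch of this selection step; the pruning here is a naive breadth-first truncation, whereas the paper additionally scores nodes by the position of their source sentences, so treat this as an approximation:

```python
import networkx as nx

def select_reasoning_chain(g, answer_node, d):
    """Take a maximum spanning tree of the (temporarily undirected)
    context graph rooted at the answer node, then keep the first d+1
    nodes reachable from the root in breadth-first order."""
    tree = nx.maximum_spanning_tree(g.to_undirected())
    order = [answer_node] + [v for _, v in nx.bfs_edges(tree, answer_node)]
    keep = set(order[: d + 1])
    return tree.subgraph(keep).copy()

g = nx.DiGraph()
g.add_edge("Top Gun", "Tony Scott", weight=1)
g.add_edge("Tom Cruise", "Top Gun", weight=1)
g.add_edge("Tom Cruise", "Nicole Kidman", weight=1)
chain = select_reasoning_chain(g, "Tom Cruise", d=2)
print(chain.nodes())   # answer node plus two chain nodes
```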
"Step-by-step Question Generation: Our step-by-step QG process is described at lines 5-11 of Algorithm 1.",
"Algorithm 1 (Procedure of Our DCQG Framework). Input: context C, difficulty level d. Output: (Q, A). 1: GCG <- BuildCG(C); 2: N0 <- SampleAnswerNode(GCG); 3: GL <- MaxTree(GCG, N0); 4: GT <- Prune(GL, d); 5: for Ni in PreorderTraversal(GT) do: 6: if i = 0 then continue; 7: NP(i) <- Parent(Ni); 8: Si <- ContextSentence(C, Ni, NP(i)); 9: Ri <- Bridge if Ni = FirstChild(NP(i)) else Intersection; 10: Qi <- QG_Initial(Ni, NP(i), Si) if i = 1 else QG_Rewrite(Qi-1, Ni, NP(i), Si, Ri); 11: end for; 12: return (Qd, N0).",
"The following notations are defined for clearer illustration: Q i (i = 1, 2, ..., d) represents the question generated at each step, where Q d is the final question Q, and Q i+1 is rewritten from Q i by adding one more hop of reasoning.",
"S i represents the context sentence from which we extract the triple connecting N i and NP(i).",
"R i is the rewriting type of Q i ( i = 2 , 3 , . . . , d ) .",
"Specifically, we consider two types of rewriting patterns in this work: Bridge and Intersection .",
"As shown in Fig. 1, Bridge -style rewriting replaces an entity with a modified clause, while Intersection adds another restriction to an existing entity in the question.",
"These two types can be distinguished by whether N i is the first child of its parent node, i.e., whether its parent node has already been rewritten once in Bridge style.",
"To generate the final question with the required difficulty level d , we first use a question generator QG Initial to generate an initial simple question based on N 1 , N 0 , and the corresponding context sentence S 1 .",
"Then, we repeatedly (for d-1 times) use QG Rewrite to rewrite question Q i-1 into a more complex one Q i, based on node N i and its parent node NP(i), context sentence S i, and the rewriting type R i (i = 2, 3, ..., d).",
"Formally, the generation process of QG Initial and the rewriting process of QG Rewrite can be defined as Q_1 = argmax_{Q_1} P(Q_1 | N_1, N_0, S_1) and Q_i = argmax_{Q_i} P(Q_i | Q_{i-1}, N_i, NP(i), S_i, R_i), where i = 2, 3, ..., d.",
"Algorithm 2 (Procedure of Data Construction). Input: context C = {P1, P2}, QA pair (Q2, A2), supporting facts F. Output: R1, (Q1, A1), S1, S2, {N0, E1, N1, E2, N2}. 1: R1 <- TypeClassify(Q2); 2: if R1 not in {Bridge, Intersection} then return; 3: subq1, subq2 <- DecompQ(Q2); 4: suba1, suba2 <- QA(subq1), QA(subq2); 5: Q1, A1 <- (subq2, suba2) if A2 = suba2 else (subq1, suba1); 6: S1, S2 <- (F ∩ P1, F ∩ P2) if Q1 concerns P1 else (F ∩ P2, F ∩ P1); 7: N2 <- FindNode(A2); 8: N0, E1, N1, E2 <- Match(subq1, subq2).",
"In our implementation, both QG Initial and QG Rewrite are initialized with the pre-trained GPT2-small model (Radford et al., 2019), and then fine-tuned on our constructed dataset (see Sec. 4).",
"The encoder of QG Rewrite , as illustrated in Fig. 2, is similar to Liu et al. (2020).",
"If N i points to NP(i), then the input sequence is organized in the form ⟨bos⟩ S i ⟨nodeC⟩ N i ⟨edge⟩ E i ⟨nodeP⟩ NP(i) ⟨type⟩ R i ⟨subq⟩ Q i-1 ⟨eos⟩, where E i is the edge from N i to NP(i).",
"The positions of ⟨nodeC⟩ N i and ⟨nodeP⟩ NP(i) will be exchanged if NP(i) points to N i.",
"As for QG Initial, its input is organized in the same way, except without ⟨type⟩ R i ⟨subq⟩ Q i-1.",
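A small sketch of this serialization, using ASCII stand-ins like <nodeC> for the special tokens and hypothetical argument names of our own:

```python
def build_rewrite_input(S_i, N_i, E_i, NP_i, R_i, Q_prev, child_to_parent=True):
    """Assemble the QG_Rewrite input string with special separator
    tokens. When the edge runs from NP(i) to N_i instead, the <nodeC>
    and <nodeP> segments swap positions. Serialization sketch only."""
    child = f"<nodeC> {N_i}"
    parent = f"<nodeP> {NP_i}"
    left, right = (child, parent) if child_to_parent else (parent, child)
    return (f"<bos> {S_i} {left} <edge> {E_i} {right} "
            f"<type> {R_i} <subq> {Q_prev} <eos>")

print(build_rewrite_input(
    S_i="Top Gun is a film directed by Tony Scott.",
    N_i="Top Gun", E_i="directed by", NP_i="Tony Scott",
    R_i="Bridge", Q_prev="Which film did Tony Scott direct?"))
```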
"The segment embedding layer is utilized to identify different segments.",
"For those parts in S i and Q i-1 that are the same as, or refer to the same entity as, NP(i), we replace their segment embeddings with the one of NP(i), considering that the parent node of N i plays an important role in denoting what to ask about, or which part to rewrite, as shown in Fig. 1.",
"Automatic Dataset Construction: Manually constructing a new dataset for our task is difficult and costly.",
"Instead, we propose to automatically build a dataset from existing QA datasets without extra human annotation.",
"In our work, the training data is constructed from HotpotQA (Yang et al., 2018), in which every context C consists of two paragraphs {P 1 , P 2 } , and most of the questions require two hops of reasoning, each concerning one paragraph .",
"HotpotQA also annotates supporting facts F , which are the part of the context most relevant to the question.",
"In addition to the information already available in HotpotQA, we also need the following information to train QG Initial and QG Rewrite :",
"i) ( Q 1 , A 1 ) , the simple initial question and its answer, which are used to train QG Initial ;",
"ii) R 2 , the type of rewriting from Q 1 to Q 2 ;",
"iii) {N 0 , N 1 , N 2 } , the reasoning chain of Q 2 ; and",
"iv) S i ( i = 1 , 2) , the context sentences where we extract N 0 , N 1 and N 2 .",
"Algorithm 2 describes our procedure to obtain the above information.",
"The construction process is facilitated with the help of a reasoning type classifier ( TypeClassify ) and a question decomposer ( DecompQ ), referring to Min et al. (2019).",
"For each question in HotpotQA (i.e. Q 2 ), we first distinguish its reasoning type, and filter out those that are not Bridge and Intersection .",
"The reasoning type here corresponds to the rewriting type R i .",
"Then, DecompQ decomposes Q 2 into two sub-questions, subq 1 and subq 2 , based on span prediction and linguistic rules.",
"For example, the Q 2 in Fig. 2 will be decomposed into subq 1 = To which film A Perfect Murder was a modern remake? , and subq 2 = Who directed Dial M for Murder? .",
"After that, an off-the-shelf single-hop QA model (Min et al., 2019) is utilized to acquire the answer of the two sub-questions, which should be Dial M for Murder and Alfred Hitchcock in the example.",
"As for Q 1 , it is one of the sub-questions.",
"When Q 2 is of the Intersection type, Q 1 can be either subq 1 or subq 2 .",
"For the Bridge type, it is the subquestion that shares the same answer as A 2 .",
"For the example above, Q 1 is subq 2 because suba 2 = A 2 .",
"The context sentence S i is supposed to provide supporting facts contained in the paragraph F that concerns Q i ( i = 1 , 2) .",
"For the reasoning chain, it is selected from the local context graph by first locating N 2 and then finding N 0 , N 1 through text matching with the two sub-questions.",
"In the following experiments, we mainly evaluate the generation results of our proposed method when required to produce 1-hop and 2-hop questions, denoted as Ours 1 hop and Ours 2 hop .",
"In Sec. 5.2, we compare our method with a set of strong baselines using both automatic and human evaluations on question quality.",
"In Sec. 5.3, we provide controllability analysis by manually evaluating their difficulty levels and testing the performance of QA systems in answering questions generated by different methods.",
"In Sec. 5.4, we test the effect of our generated QA pairs on the performance of a multi-hop QA model in a data augmentation setting.",
"In Sec. 5.5, we further analyze the extensibility of our method, i.e., its potential in generating questions that require reasoning of more than two hops.",
"Our code and constructed dataset have been made publicly available to facilitate future research.",
"1 5.1 Experimental Setup Datasets The constructed dataset described in Sec. 4 consists of 57,397/6,072/6,072 samples for training/validation/test.",
"For context graph construction, we use the coreference resolution toolkit from AllenNLP 1.0.0 (Lee et al., 2017) and the open information extraction toolkit provided by the Plasticity developer API.",
"2 The question decomposer and the reasoning type classifier follow the implementations of Min et al. (2019).",
"generate the 2-hop questions in the datasets: NQG++ (Zhou et al., 2017) is a seq2seq model based on bi-directional Gate Recurrent Unit (GRU), with features enriched by answer position and lexical information.",
"ASs2s (Kim et al., 2019) is a seq2seq model based on Long Short-term Memory (LSTM), which separately encodes answer and context.",
"SRL-Graph and DP-Graph (Pan et al., 2020) are two state-of-the-art QG systems.",
"They encode graph-level and document-level information with an attention-based Graph Neural Network (GNN) and a bi-directional GRU, respectively.",
"SRL-Graph constructs the semantic graph by semantic role labelling, and DP-Graph by dependency parsing.",
"GPT2 is a vanilla GPT2-based QG model.",
"Its input is the concatenation of context and sampled answer.",
"The position where the answer appears in the context segment is denoted in the segment embedding layer.",
"Implementation Details The baseline models are trained to directly produce the 2-hop questions, while QG Initial and QG Rewrite are respectively trained to generate 1-hop questions and rewrite 1-hop ones into 2-hop.",
"QG Initial , QG Rewrite , and GPT2 are initialized with the GPT2-small model from the HuggingFace Transformer library (Wolf et al., 2019), and fine-tuned for 8, 10, and 7 epochs, respectively, with batch size of 16.",
"We apply topp nucleus sampling with p = 0.9 during decoding.",
"AdamW (Loshchilov and Hutter, 2017) is used as optimizer, with the initial learning rate set to be 6 .",
"25 10 5 and adaptively decays during training.",
"For DP-Graph, we use their released model and code to perform the experiment.",
"For the other three baselines, we directly refer to the experiment results reported in Pan et al. (2020).",
"The performances of these baselines are compared under the same setting as in Pan et al. (2020), where each context is abbreviated to only include the supporting facts and the part that overlaps with the question.",
"More implementation details can be found in our code and the supplementary materials.",
"Automatic Evaluation The automatic evaluation metrics are BLEU3, BLEU4 (Papineni et al., 2002), METEOR (Lavie and Agarwal, 2007), and CIDEr (Vedantam et al., 2015), which measure the similarity between the generation results and the reference questions in terms of n -grams.",
"As the four baselines are trained to generate 2-hop questions only, we only compare them with Ours 2 hop .",
"As shown in Table 1, we can see that Ours 2 hop and GPT2 perform consistently better than the others.",
"Though the performances of Ours 2 hop and GPT2 are close in terms of automatic metrics, we observe that the questions generated by Ours 2 hop are usually more well-formed, concise and answerable, as illustrated in Table 2. These advantages cannot be reflected through automatic evaluation.",
"Human Evaluation We randomly sample 200 questions respectively from DP-Graph, GPT2, Ours 1 hop , Ours 2 hop , as well as the reference 1-hop and 2-hop questions in the constructed dataset (Gold 1 hop , Gold 2 hop ).",
"The questions are manually evaluated by eight human annotators, who are graduate students, majoring in English Literature, Computer Science, or Electronic Engineering.",
"They voluntarily offer to help without being com-Ours 2 hop GPT2 When was the first theatre director of African descent born?",
"Before annotation, they are informed of the detailed annotation instruction with clear scoring examples.",
"The generated questions are evaluated in the following four dimensions: Well-formed : It checks whether a question is semantically correct.",
"Annotators are asked to mark a question as yes , acceptable , or no .",
"Acceptable is selected if the question is not grammatically correct, but its meaning is still inferrable.",
"Concise : It checks whether the QG models are overfitted, generating questions with redundant modifiers.",
"The question is marked as yes if no single word can be deleted, acceptable if it is a little lengthy but still in a natural way, and no if it is abnormally verbose.",
"Answerable : It checks whether a question is answerable according to the given context.",
"The anonnotion is either yes or no .",
"Answer Matching : It checks whether the given answer is the correct answer to the question.",
"The anonnotion is either yes or no .",
"The results are shown in Table 3. Overall, we can see that Ours 2 hop performs consistently better than DP-Graph and GPT2 across all metrics and comparable to the hand-crafted reference questions.",
"Our method performs especially well in terms of concise , even better than the reference questions.",
"For reference, the average word number of the questions generated by DP-Graph, GPT2, Ours 2 hop , and Gold 2 hop are 19.32, 19.26, 17.18, 17.44, respectively.",
"It demonstrates that the enriched graph information and our multi-stage rewriting mechanism indeed enhance the question structure and content.",
"In comparison, we find that the questions generated by the two baselines tend to unreasonably pile too many modifiers and subordinate clauses.",
"As for the 1-hop questions, Ours 1 hop performs well in terms of answerable and answer matching , but not so competitive in terms of well-formed , mainly due to the limitation of its training data.",
"As the 1-hop reference questions (Gold 1 hop ) are automatically decomposed from the hand-crafted 2-hop questions, a significant portion (44%) of them have some grammatical errors, but most of them are still understandable despite that.",
"Human Evaluation of Controllability For controllability analysis, we manually evaluate the num-bers of inference steps involved in generated questions.",
"DP-Graph and GPT2 are also evaluated for comparison.",
"The results are shown in Table 4. 70.65% of Ours 1 hop require one step of inference and 67.74% of Ours 2 hop require two steps, proving that our framework can successfully control the number of inference steps of most generated questions.",
"In comparison, DP-Graph and GPT2 are not difficulty-aware and their generated questions are more scattered in difficulty levels.",
"Difficulty Assessment with QA Systems For further assessment of question difficulty, we test the performance of QA models in answering questions generated by different models.",
"Specifically, we utilize two off-the-shelf QA models provided by the HuggingFace Transformer library (Wolf et al., 2019), which are respectively initialized with Test Set BERT RoBERTa EM F1 EM F1 DP-Graph 0.436 0.615 0.552 0.678 GPT2 0.419 0.581 0.669 0.772 Ours 2 hop 0.295 0.381 0.506 0.663 Ours 1 hop 0.618 0.737 0.882 0.937 Table 5: Performance of BERTand RoBERTa-based QA models on different generated QA datasets.",
"BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019b), and then fine-tuned on SQuAD (Ra-jpurkar et al., 2016).",
"We select those generated questions that are ensured to be paired with correct answers by the human evaluation described in Sec. 5.2, and test the performance of two QA models in answering them.",
"The evaluation metrics include Exact Match (EM) and F1.",
"The results are shown in Table 5. We can see that questions generated by Ours 2 hop are more difficult than Ours 1 hop not only to humans (requiring more hops of reasoning), but also to the state-of-the-art QA models.",
"In comparison, with a more scattered mix of 1-hop and 2-hop questions, the performances on DP-Graph and GPT2 are between Ours 1 hop and Ours 2 hop .",
"This result demonstrates that our method can controllably generate questions of different difficulty levels for QA systems and that inference steps can effectively model the question difficulty.",
"We further evaluate whether the generated QA pairs can boost QA performance through data augmentation.",
"Specifically, we heuristically sample the answers and reasoning chains from the context graphs in our constructed dataset to generate 150,305 two-hop questions.",
"As a comparison, we utilize GPT2 to generate the same amount of data with the same sampled answers and contextual sentences.",
"Some low-quality questions are filtered out if their word 0.0 2.5 5.0 7.5 10 \u00001\u0000X\u0000P\u0000E\u0000H\u0000U\u0000\u0003\u0000R\u0000I\u0000\u0003\u0000$\u0000X\u0000J\u0000P\u0000H\u0000Q\u0000W\u0000H\u0000G\u0000\u0003\u00006\u0000D\u0000P\u0000S\u0000O\u0000H\u0000V\u0000\u0003\u0000\u0003\u0000\u000b\u0000L\u0000Q\u0000\u0003\u0000W\u0000K\u0000R\u0000X\u0000V\u0000D\u0000Q\u0000G\u0000V\u0000\f 0.66 0.68 0.70 0.72 0.74 0.76 0.78 0.80 \u0000( \u00000 100%HotpotQA + Ours 100%HotpotQA + GPT2 25%HotpotQA + Ours 25%HotpotQA + GPT2 0.0 2.5 5.0 7.5 10 \u00001\u0000X\u0000P\u0000E\u0000H\u0000U\u0000\u0003\u0000R\u0000I\u0000\u0003\u0000$\u0000X\u0000J\u0000P\u0000H\u0000Q\u0000W\u0000H\u0000G\u0000\u0003\u00006\u0000D\u0000P\u0000S\u0000O\u0000H\u0000V\u0000\u0003\u0000\u0003\u0000\u000b\u0000L\u0000Q\u0000\u0003\u0000W\u0000K\u0000R\u0000X\u0000V\u0000D\u0000Q\u0000G\u0000V\u0000\f 0.74 0.76 0.78 0.80 0.82 0.84 0.86 0.88 \u0000) \u0000\u0014 100%HotpotQA + Ours 100%HotpotQA + GPT2 25%HotpotQA + Ours 25%HotpotQA + GPT2 Figure 3: Performance of the DistilBERT-based QA system on HotpotQA, augmented with different quantities of generated data.",
"counts are not between 6 30 (4.7% for ours and 9.2% for GPT2), or the answers directly appear in the questions (2.7% for ours and 2.4% for GPT2).",
"Finally, we randomly sample 100,000 QA pairs and augment the HotpotQA dataset with them.",
"Context Reasoning QG Process Hollywood Arms is a play by Carrie Hamilton and Carol Burnett.",
"It ran at the Goodman Theatre and on Broadway in 2002 Q : What was run at the Goodman Theatre in 2002?",
"A DistilBERT-based (Sanh et al., 2019) QA model is implemented.",
"It takes as input the concatenation of context and question to predict the answer span.",
"To speed up the experiment, we only consider those necessary supporting facts as the question answering context.",
"During training, the original samples from HotpotQA are oversampled to ensure that they are at least 4 times as the generated data.",
"We use Adam (Kingma and Ba, 2015) as the optimizer, with the mini-batch size of 32.",
"The learning rate is initially set to 3 10 5 and adaptively decays during training.",
"The configurations are the same in all the QA experiments, except that the training datasets are different combinations of HotpotQA and the generated data.",
"The validation and test sets are the same as those in HotpotQA.",
"Q : What play by Carrie Hamilton was run at the Goodman Theatre in 2002?",
"We test the impact of the generated data under both high-resource (using the whole training set of HotpotQA) and low-resource settings (us-ing only 25% of the data randomly sampled from HotpotQA).",
"Fig. 3 compares the QA performance, augmented with different quantities of the data generated by our method and by GPT2, respectively.",
"We can see that under both settings, our method achieves better performance than GPT2.",
"Under the low-resource setting, performance boost achieved by our generated data is more significant and obviously better than that of GPT2.",
"The performance of the QA model steadily improves when the training dataset is augmented with more data.",
"EM and F1 of the QA model are improved by 2.56% and 1.69%, respectively, when 100,000 samples of our generated data are utilized.",
"To analyze the extensibility of our method, we experiment with the generation of questions that are more than 2-hop, by repeatedly using QG Rewrite to increase question difficulty.",
"Fig. 4 shows two examples of 3-hop question generation process.",
"The two intermediate questions and the corresponding reasoning chains are also listed for reference.",
"We can see that the intermediate questions, serving as springboards, are effectively used by QG Rewrite to generate more complex questions.",
"With the training data that only contains 1-hop and 2-hop questions, our framework is able to generate some high-quality 3-hop questions, demonstrating the extensibility of our framework.",
"It can be expected that the performance of our model can be further strengthened if a small training set of 3-hop question data is available.",
"Besides, it can also be observed that though the contexts and answers of these two questions are the same, two different questions with different underlying logic are generated, illustrating that the extracted reasoning chain effectively controls the question content.",
"However, when generating questions with more than 3 hops, we find that the question quality drastically declines.",
"The semantic errors become more popular, and some content tend to be unreasonably repeated.",
"It is probably because the input of QG Rewrite has become too long to be precisely encoded by the GPT2-small model due to the growing length of the question.",
"It will be our future work to explore how to effectively extend our method to more-hop question generation.",
"We explored the task of difficulty-controllable question generation, with question difficulty redefined as the inference steps required to answer it.",
"A step-by-step generation framework was proposed to accomplish this objective, with an input sampler to extract the reasoning chain, a question generator to produce a simple question, and a question rewriter to further adapt it into a more complex one.",
"A dataset was automatically constructed based on HotpotQA to facilitate the research.",
"Extensive evaluations demonstrated that our method can effectively control difficulty of the generated questions, and keep high question quality at the same time.",
"Thanks to Zijing Ou, Yafei Liu and Suyuchen Wang for their helpful comments on this paper."
] | [
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"other"
] |
[
"Standard architectures used in instruction following often struggle on novel compositions of subgoals (e.g. navigating to landmarks or picking up objects) observed during training.",
"We propose a modular architecture for following natural language instructions that describe sequences of diverse subgoals.",
"In our approach, subgoal modules each carry out natural language instructions for a specific subgoal type.",
"A sequence of modules to execute is chosen by learning to segment the instructions and predicting a subgoal type for each segment.",
"When compared to standard, non-modular sequence-to-sequence approaches on ALFRED (Shridhar et al., 2020), a challenging instruction following benchmark, we find that modularization improves generalization to novel subgoal compositions, as well as to environments unseen in training.",
"Work on grounded instruction following (MacMa-hon et al., 2006; Vogel and Jurafsky, 2010; Tellex et al., 2011; Chen and Mooney, 2011; Artzi and Zettlemoyer, 2013) has recently been driven by sequence-to-sequence models (Mei et al., 2016; Hermann et al., 2017), which allow end-to-end grounding of linguistically-rich instructions into equally-rich visual contexts (Misra et al., 2018; Anderson et al., 2018; Chen et al., 2019).",
"These sequence-to-sequence models are monolithic : they consist of a single network structure which is applied identically to every example in the dataset.",
"Monolithic instruction following models typically perform well when evaluated on test data from the same distribution seen during training.",
"However, they often struggle in compositional generalization : composing atomic parts, such as actions or goals, where the parts are seen in training but their compositions are not (Lake and Baroni, 2018; Ruis et al., 2020; Hill et al., 2020).",
"In this work, we improve compositional generalization in instruction following with modular networks , which have been successful in non-embodied language grounding tasks (Andreas et al., 2016; Hu et al., 2017; Cirik et al., 2018; Yu et al., 2018; Mao et al., 2019; Han et al., 2019) and in following synthetic instructions or symbolic policy descriptions (Andreas et al., 2017; Oh et al., 2017; Das et al., 2018).",
"Modular networks split the decision making process into a set of neural modules.",
"Modules are each specialized for some function, composed into a structure specific to each example, and trained jointly to complete the task.",
"because of their composable structure (Devin et al., 2017; Andreas et al., 2017; Bahdanau et al., 2019; Purushwalkam et al., 2019), and that they can generalize to new environments or domains through module specialization (Hu et al., 2019; Blukis et al., 2020).",
"However, all this work has either focused on grounding tasks without a temporal component or used a network structure which is not predicted from language.",
"We propose a modular architecture for embodied vision-and-language instruction following 1 , and find that this architecture improves generalization on unseen compositions of subgoals (such as navigation, picking up objects, cleaning them, etc.).",
"We define separate sequence-to-sequence modules per type of subgoal.",
"These modules are strung together to execute complex high-level tasks.",
"We train a controller to predict a sequence of subgoal types from language instructions, which determines the order in which to execute the modules.",
"We evaluate models on the ALFRED dataset (Shridhar et al., 2020), an instruction-following benchmark containing a diverse set of household tasks.",
"We focus on compositional generalization: carrying out instructions describing novel high-level tasks, containing novel compositions of actions (see Figure 1 for an example).",
"We find that our modular model improves performance on average across subgoal types when compared to a standard, monolithic sequence-to-sequence architecture.",
"Additionally, we find improved generalization to environments not seen in training.",
"We focus on following instructions in embodied tasks involving navigation and complex object interactions, as shown in Figure 2.",
"In training, each set of full instructions ( e.g. Turn right and cross the room ... Place the vase on the coffee table to the left of the computer.) is paired with a demonstration of image observations and actions.",
"In training, we further assume that the full instruction is segmented into subgoal instructions , and each subgoal instruction is labeled with one of a small number (in our work, 8) of subgoal types , e.g. [Walk to the coffee maker.: GOTO ], [Pick up the dirty mug...: PICKUP ], . . . , and paired with the corresponding segment of the demonstration.",
"During evaluation, the agent is given only full instructions (which are unsegmented and unlabeled), and must predict a sequence of actions to carry out the instructions, conditioning on the image observations it receives.",
"Our modular architecture for compositional instruction following consists of a high-level controller (Figure 2, left), and modules for each subgoal type (Figure 2, right).",
"The high-level controller chooses modules to execute in sequence based on the natural language instructions, and each chosen module executes until it outputs a STOP action.",
"The modules all share the same sequence-to-sequence architecture, which is the same as the monolithic architecture.",
"We initialize each module's parameters with parameters from the monolithic model, and then fine-tune the parameters of each module to specialize for its subgoal.",
"Our instruction-based controller is trained to segment a full instruction into sub-instructions and predict the subgoal type for each sub-instruction.",
"We use a linear chain CRF (Lafferty et al., 2001) that conditions on a bidirectional-LSTM encoding of the full instruction and predicts tags for each word, which determine the segmentation and sequence of subgoal types.",
"This model is based on standard neural segmentation and labelling models (Huang et al., 2015; Lample et al., 2016).",
"We train the controller on the ground-truth instruction segmentations and subgoal sequence labels, and in evaluation use the model to predict segmentations and their associated subgoal sequences (Figure 2, top left).",
"This predicted sequence of subgoals determines the order to execute the modules (Figure 2, right).",
"We use a BIO chunking scheme to jointly segment the instruction and predict a subgoal label for each segment.",
"Formally, for a full instruction of length N , the controller defines a distribution over subgoal tags s 1: N for each word given the instruction x 1: N as p ( s 1: N | x 1: N ) exp N (cid:88) n =1 (cid:0) U s n + B s n 1 ,s n (cid:1) The subgoal tag scores U s n for word n are given by a linear projection of bidirectional LSTM features for the word at position n .",
"The tag transition scores B s n 1 ,s n are learned scalar parameters.",
"In training, we supervise s 1: N using the segmentation of the instruction x 1: N into K subgoal instructions and the subgoal label for each instruction.",
"To predict subgoals for a full instruction in evaluation, we obtain arg max s 1: N p ( s 1: N | x 1: N ) using Viterbi decoding, which provides a segmentation into sub-instructions and a subgoal label for each sub-instruction.",
"Our modularized architecture may be seen in Figure 2, right.",
"The architecture consists of 8 independent modules, one for each of the 8 subgoals in the domain ( e.g. GOTO , PICKUP ).",
"For each module, we use the same architecture as Shridhar et al. (2020)'s monolithic model.",
"This is a sequence-to-sequence model composed of an LSTM decoder taking as input an attended embedding of the natural language instruction, pretrained ResNet-18 (He et al., 2016) features of the image observations, and the previous action's embedding.",
"Hidden states are passed between the modules' LSTM decoders at subgoal transitions (Figure 2, right).",
"At each time step, each module M i computes its hidden state based on the last time step's action a t 1 , the current time step's observed image features o t , an attended language embedding x it , and the previous hidden state h it 1 : e it = [ a t 1 ; o t ; x it ] h it = LSTM i ( e it , h it 1 ) Each module's attended language embedding x it is produced using its own attention mechanism over embeddings X = x 1: N of the language instruction, which are produced by a bidirectional LSTM encoder: z it = ( W ix h it 1 ) (cid:62) X it = Softmax ( z it ) x it = ( it ) (cid:62) X Finally, the action a t and object interaction mask m t are predicted from h it and e it with a linear layer and a deconvolution network respectively.",
"More details about this architecture can be found in Shridhar et al. (2020).",
"Both the action and mask decoders, well as the language encoder, are shared across modules.",
"2 Our use of subgoal modules is similar to the hierarchical policy approaches of Andreas et al. (2017), Oh et al. (2017), and Das et al. (2018).",
"However, in those approaches, the input to each module is symbolic ( e.g. FIND [ KITCHEN ]).",
"In contrast, all modules in our work condition directly on natural language.",
"We first pre-train the monolithic model by maximizing the likelihood of the ground-truth trajectories in the training data (Shridhar et al., 2020).",
"We train for up to 20 epochs using the Adam optimizer (Kingma and Ba, 2014) with early stopping on validation data (see Appendix A.1 for hyperparam-eters).",
"We use this monolithic model to initialize the parameters of each of the modules, which have identical architecture to the monolithic model, and 2 The modules' instruction encoder is separate from the controller's encoder (Sec. 2.1), as we found it possible to achieve high performance on the subgoal prediction task using a smaller encoder than the one used by the modules.",
"fine-tune them using the same training and early stopping procedure on the same validation data, 3 allowing the monolithic model's parameters to specialize for each module.",
"Each module predicts only the actions for its segment of each trajectory; however, modules are jointly fine-tuned, passing hidden states (and gradients) from module to module.",
"We evaluate models on out-of-domain generalization in two conditions (see below) using the ALFRED benchmark (Shridhar et al., 2020), comparing our modular approach to their non-modular sequence-to-sequence model.",
"ALFRED is implemented in AI2-THOR 2.0 (Kolve et al., 2017), which contains a set of simulated environments with realistic indoor scene renderings and object interactions.",
"The dataset contains approximately 25K expert instruction-trajectory pairs, comprised of 3 instructions for each of 8K unique trajectories.",
"The instructions include both a high level instruction and a sequence of low level instructions.",
"In our experiments, we do not use the high level instructions, which Shridhar et al. (2020) found to produce comparable results when evaluated on generalization to unseen environments with these architectures.",
"Figure 1 shows two example trajectories and their associated instructions.",
"Trajectories are composed (see Sec. 2) of sequences of eight different types of subgoals: navigation (GOTO ) and a variety of object interactions ( e.g. PICKUP , CLEAN , HEAT ).",
"Each subgoal's subtrajectory is composed of a sequence of low-level discrete actions which specify commands for navigation or object interactions (which are accompanied by image segmentations to choose the object to interact with).",
"The ALFRED dataset was constructed to test generalization to novel instructions and unseen environments.",
"However, all evaluation trajectories in the dataset correspond to sequences of subgoals that are seen during training.",
"For example, some training and evaluation instances might both correspond to the underlying subgoal sequence GOTO , PICKUP , GOTO , PUT , but differ in their low-level actions, their language descriptions, and possibly also the environments they are carried out in.",
"Novel Tasks.",
"We evaluate models' ability to generalize to different high-level tasks (compositions of subgoals) than seen in training.",
"The dataset contains seven different task types, such as Pick & Place , as described in Appendix B.1.",
"We hold out two task types and evaluate models on their ability to generalize to them: Pick Two & Place and Stack & Place .",
"These tasks are chosen because they contain subgoal types that are all individually seen in training, but typically in different sequences.",
"We create generalization splits pick-2-seen and pick-2-unseen by filtering the seen and unseen splits below to contain only Pick Two & Place tasks, and remove all Pick Two & Place tasks from the training data.",
"We create splits stack-seen and stack-unseen for Stack & Place similarly.",
"Novel Instructions and Environments This is the standard condition defined in the original ALFRED dataset.",
"There are two held-out validation sets: seen , which tests generalization to novel instructions and trajectories but through environments seen during training, and unseen , which tests generalization to novel environments: rooms with new layouts, object appearances, and furnishings.",
"We compare our modular architecture with the monolithic baseline, averaging performance over models trained from 3 random seeds.",
"For each generalization condition, we measure success rates over full trajectories as well as over each subgoal type independently.",
"Due to the challenging nature of the domain, subgoal evaluation provides finer-grained comparisons than full trajectories.",
"We use the same evaluation methods and metrics as in Shridhar et al. (2020).",
"Success rates are weighted by path lengths to penalize successful trajectories which are longer than the ground-truth demonstration trajectory.",
"To evaluate full trajectories, we measure path completion: the portion of subgoals completed within the full trajectories.",
"To evaluate the subgoals independently, we advance the model along the expert trajectory up until the point where a given subgoal begins (to maintain a history of actions and observations), then require the model to carry out the subgoal from that point.",
"We also report results from Shridhar et al. (2020) and Singh et al. (2020).",
"We note that the approach of Singh et al. (2020) obtains higher performance on full trajectories than the system of Shridhar et al. (2020) (which we base our approach on) Model C l ea n C oo l G o t o H ea t P i c kup P u t S li ce T ogg l e A vg .",
"primarily by introducing a modular object interaction architecture (shared across all subgoals) and a pre-trained object segmentation model.",
"These techniques could also be incorporated into our approach, which uses modular components for individual subgoal types.",
"Novel Tasks.",
"Table 1 shows for each split the success rates on subgoals appearing in at least 50 validation examples.",
"The modular outperforms the monolithic model on both seen and unseen splits (Tables 1b and 1c).",
"Full trajectory results for novel task generalization are shown in Table 2.",
"In the double generalization condition (unseen environments for the held-out pick-2 and stack tasks) on full trajectories, neither model completes subgoals successfully.",
"Overall, we find that modularity helps across most generalization conditions.",
"Generalization to novel environments.",
"We also compare models on generalization to unseen environments.",
"In the independent subgoal evaluation, the monolithic and modular models perform equally on average in the standard-seen split (Ta-Standard Standard Pick-2 Stack Model seen unseen seen seen S+ 9.4 (5.7) 7.4 (4.7) MOCA 28.5 (22.3) 13.4 (8.3) Mono.",
"ble 1a, top).",
"However, in the standard-unseen split (Table 1a, bottom), our modular model outperforms the baseline substantially, with an average success rate of 57% compared to the monolithic model's 46% .",
"(On subgoal types not shown, the modular model still outperforms the monolithic, by margins up to 16%.)",
"In the full trajectory results (Table 2) we see comparable performance between the monolithic and modular models on unseen environments.",
"We introduced a novel modular architecture for grounded instruction following where each module is a sequence-to-sequence model conditioned on natural language instructions.",
"With the ALFRED dataset as a testbed, we showed that our modular model achieves better out-of-domain generalization, generalizing better at the subgoal level to novel task compositions and unseen environments than the monolithic model used in prior work.",
"All of the module types in our model currently use separate parameterizations but identical architectures; future work might leverage the modularity of our approach by using specialized architectures, training procedures, or loss functions for each subgoal type.",
"Furthermore, unsupervised methods for jointly segmenting instructions and trajectories without requiring labeled subgoal labels and alignments would be a valuable addition to our framework.",
"This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No.",
"DGE 1752814, a Ford Foundation fellowship to the first author, a Google PhD fellowship to the second author, and by DARPA through the XAI program and the LwLL program."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"method",
"objective",
"result",
"result",
"result",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"objective",
"method",
"abstain",
"other",
"other"
] |
[
"Incremental domain adaptation, in which a system learns from the correct output for each input immediately after making its prediction for that input, can dramatically improve system performance for interactive machine translation.",
"Users of interactive systems are sensitive to the speed of adaptation and how often a system repeats mistakes, despite being corrected.",
"Adaptation is most commonly assessed using corpus-level BLEUor TER-derived metrics that do not explicitly take adaptation speed into account.",
"We find that these metrics often do not capture immediate adaptation effects, such as zero-shot and one-shot learning of domain-specific lexical items.",
"To this end, we propose new metrics that directly evaluate immediate adaptation performance for machine translation.",
"We use these metrics to choose the most suitable adaptation method from a range of different adaptation techniques for neural machine translation systems.",
"Incremental domain adaptation, or online adaptation, has been shown to improve statistical machine translation and especially neural machine translation (NMT) systems significantly (Turchi et al., 2017; Karimova et al., 2018) ( inter-alia ).",
"The natural use case is a computer-aided translation (CAT) scenario, where a user and a machine translation system collaborate to translate a document.",
"Each user translation is immediately used as a new training example to adapt the machine translation system to the specific document.",
"Adaptation techniques for MT are typically evaluated by their corpus translation quality, but such evaluations may not capture prominent aspects of the user experience in a collaborative translation scenario.",
"This paper focuses on directly measuring the speed of lexical acquisition for in-domain vocabulary.",
"To that end, we propose three related metrics that are designed to reflect the responsiveness of adaptation.",
"An ideal system would immediately acquire in-domain lexical items upon observing their translations.",
"Moreover, one might expect a neural system to generalize from one corrected translation to related terms.",
"Once a user translates bank to German Bank ( institution ) instead of Ufer ( shore ) in a document, the system should also correctly translate banks to Banken instead of Ufer (the plural is identical to the singular in German) in future sentences.",
"We measure both one-shot vocabulary acquisition for terms that have appeared once in a previous target sentence, as well as zero-shot vocabulary acquisition for terms that have not previously appeared.",
"Our experimental evaluation shows some surprising results.",
"Methods that appear to have comparable performance using corpus quality metrics such as BLEU can differ substantially in zero-shot and one-shot vocabulary acquisition.",
"In addition, we find that fine-tuning a neural model tends to improve one-shot vocabulary recall while degrading zero-shot vocabulary recall.",
"We evaluate several adaptation techniques on a range of online adaptation datasets.",
"Fine tuning applied to all parameters in the NMT model maximizes one-shot acquisition, but shows a worrisome degradation in zero-shot recall.",
"By contrast, fine tuning with group lasso regularization, a technique recently proposed to improve the space effi-ciency of adapted models (Wuebker et al., 2018), achieves an appealing balance of zero-shot and one-shot vocabulary acquisition as well as high corpus-level translation quality.",
"For interactive, adaptive machine translation systems, perceived adaptation performance is a crucial property: An error in the machine translation output which needs to be corrected multiple times can cause frustration, and thus may compromise acceptance of the MT system by human users.",
"A class of errors that are particularly salient are lexical choice errors for domain-specific lexical items.",
"In the extreme, NMT systems using subword modeling (Sennrich et al., 2015) can generate hallucinated wordswords that do not exist in the target languagewhich are especially irritating for users (Lee et al., 2018; Koehn and Knowles, 2017).",
"Users of adaptive MT have a reasonable expectation that in-domain vocabulary will be translated correctly after the translation of a term or some related term has been corrected manually.",
"Arguably, more subtle errors, referring to syntax, word order or more general semantics are less of a focus for immediate adaptation, as these types of errors are also harder to pinpoint and thus to evaluate 1 (Bentivogli et al., 2016).",
"Traditional metrics for evaluating machine translation outputs, e.g. BLEU and TER, in essence try to measure the similarity of a hypothesized translation to one or more reference translations, taking the full string into account.",
"Due to significant improvements in MT quality with neural models (Bentivogli et al., 2016) ( inter-alia ), more specialized metrics, evaluating certain desired behaviors of systems become more useful for specific tasks.",
"For example, Wuebker et al. (2016) show, that NMT models, while being better in most respects, still fall short in the handling of content words in comparison with phrase-based MT. This observation is also supported by Ben-tivogli et al. (2016), who show smaller gains for NMT for translation of nouns, an important category of content words.",
"Another reason to isolate vocabulary acquisition as an evaluation criterion is that interactive translation often employs local adaptation via prefix-decoding (Knowles and Koehn, 2016; Wuebker et al., 2016), which can allow the system to recover syntactic structure or resolve local am-1 Some practitioners observed that these subtle errors become harder to spot due the improved fluency of NMT systems (Burchardt, 2017).",
"biguities when given a prefix, but may still suffer from poor handling of unknown or domain-specific vocabulary.",
"In this work, we therefore focus on translation performance with respect to content words, setting word order and other aspects aside.",
"We propose three metrics: one to directly measure one-shot vocabulary acquisition, one to measure zero-shot vocabulary acquisition, and one to measure both.",
"In all three, we measure the recall of target-language content words so that the metrics can be computed automatically by comparing translation hypotheses to reference translations without the use of models or word alignments 2 .",
"We define content words as those words that are not included in a fixed stopword list, as used for example in query simplification for information retrieval.",
"Such lists are typically compiled manually and are available for many languages.",
"3 For western languages, content words are mostly nouns, main verbs, adjectives or adverbs.",
"For the i -th pair of source sentence and reference translation, i = 1 , . . . , |G| , of an ordered test corpus G , we define two sets R 0 ,i and R 1 ,i that are a subset of the whole set of unique 4 content words (i.e. types) of the reference translation for i .",
"R 0 ,i includes a word if its first occurrence in the test set is in the i -th reference of G , and R 1 ,i if its second occurrence in the test set is in the i -th reference of G .",
"The union R 0 ,i R 1 ,i includes content words occurring for either the first or second time.",
"To measure zero-shot adaptation in a given hypothesis H i , also represented as a set of its content words, we propose to evaluate the number of word types that were immediately translated correctly: R0 = |H i R 0 ,i | |R 0 ,i | .",
"ob-2 In each of the data sets considered in this work, the average number of occurrences of content words ranges between 1.01 and 1.11 per sentence.",
"We find this sufficiently close to 1 to evaluate in a bag-of-words fashion and not consider alignments.",
"3 For German we used the list available here: https://github.com/stopwords-iso .",
"4 All proposed metrics operate on the set-level, without clipping (Papineni et al., 2002) or alignment (Banerjee and Lavie, 2005; Kothur et al., 2018), as we have found this simplification effective.",
"serving it exactly once, we propose: R1 = |H i R 1 ,i | |R 1 ,i | .",
"This principle can be extended to define metrics R k , k > 1 to allow more slack in the adaptation, but we leave that investigation to future work.",
"Finally, we define a metric that measures both zeroand one-shot adaptation: R0+1 = |H i [ R 0 ,i R 1 ,i ] | |R 0 ,i R 1 ,i | .",
"All metrics can either be calculated for single sentences as described above, or for a full test corpus by summing over all sentences, e.g. for R0: (cid:80) |G| i =1 |H i R 0 ,i | (cid:80) |G| i =1 |R 0 ,i | .",
"Figure 1 gives an example calculation of all three metrics across a two-sentence corpus.",
"An important line of related work is concerned with estimating the potential adaptability of a system given a source text only, the so-called repetition rate (Cettolo et al., 2014).",
"The metric is inspired by BLEU, and uses a sliding window over the source text to count singleton N -grams.",
"The modus operandi for our metrics is most similar to HTER (Snover et al., 2006), since we are also assuming a single, targeted reference translation 5 for evaluation.",
"The introduction of NMT brought more aspects of translation quality evaluation into focus, such as discourse-level evaluation (Bawden et al., 2017), or very fine-grained evaluation of specific aspects of the translations (Bentivogli et al., 2016), highlighting the differences between phrase-based and NMT systems.",
"5 A reference translation which was produced from post-editing output of the to-be-evaluated MT system.",
"Online adaptation for (neural) machine translation has been thoroughly explored using BLEU (Turchi et al., 2017), simulated keystroke and mouse action ratio (Barrachina et al., 2009) for effort estimation (Peris and Casacuberta, 2018), word prediction accuracy (Wuebker et al., 2016), and user studies (Denkowski et al., 2014; Karimova et al., 2018) (all inter-alia ).",
"In (Simianer et al., 2016) immediate adaptation for hierarchical phrase-based MT is specifically investigated, but they also evaluate their systems using human-targeted BLEU and TER.",
"Regularization for segment-wise continued training in NMT has been explored by Khayrallah et al. (2018) by means of knowledge distillation, and with the group lasso by Wuebker et al. (2018), as used in this paper.",
"Most relevant to our work, in the context of document-level adaptation, Kothur et al. (2018) calculate accuracy for novel words based on an automatic word alignment.",
"However, they do not focus on zeroand one-shot matches, but instead aggregate counts over the full corpus.",
"NMT systems can be readily adapted by fine-tuning (also called continued training) with the same cross-entropy loss ( L ) as used for training the parameters of the baseline system, which also serves as the starting point for adaptation (Lu-ong and Manning, 2015).",
"Following Turchi et al. (2017), we perform learning from each example i using (stochastic) gradient descent, using the current source x i and reference translation y i as a batch of size 1: i i 1 L ( i 1 , x i , y i ) .",
"Evaluation is carried out using simulated post-editing (Hardt and Elming, 2010), first translating the source using the model with parameters i 1 , before performing the update described",
"above with the now revealed reference translation.",
"The machine translation system effectively only trains for a single iteration for any given data set.",
"The nave approach, updating all parameters of the NMT model, while being effective, can be infeasible in certain settings 6 , since tens of millions of parameters are updated depending on the respective model.",
"While some areas of a typical NMT model can be stored in a sparse fashion without loss (sourceand target embeddings), large parts of the model cannot.",
"We denote this type of adaptation as full .",
"A light-weight alternative to adaptation of the full parameter set is to introduce a second bias term in the final output layer of the NMT model, which is trained in isolation, freezing the rest of the model (Michel and Neubig, 2018).",
"This merely introduces a vector in the size of the output vocabulary.",
"This method is referred to as bias .",
"Another alternative is freezing parts of the model (Thompson et al., 2018), for example determining a subset of parameters by performance on a held-out set (Wuebker et al., 2018).",
"In our experiments we use two systems using this method, fixed and top , the former being a pre-determined fixed selection of parameters, and the latter being the topmost encoder and decoder layers in the Transformer NMT model (Vaswani et al., 2017).",
"Finally, a data-driven alternative to the fixed freezing method was introduced to NMT by Wuebker et al. (2018), implementing tensor-wise (cid:96) 1 /(cid:96) 2 group lasso regularization, allowing the learning procedure to select a fixed number of parameters after each update.",
"This setup is referred to as lasso .",
"We adapt an English German NMT system based on the Transformer architecture trained with an in-house NMT framework on about 100M bilingual sentence pairs.",
"The model has six layers in the encoder, three layers in the decoder, each with eight attention heads with dimensionality 256, distinct input and output embeddings, and vocabulary sizes of around 40,000.",
"The vocabularies are generated with byte-pair encoding (Sen-nrich et al., 2015).",
"For adaptation we use a learning rate of 10 2 (for the bias adaptation a learn-6 For example in setups where a large number of these adapted models need to be stored and transferred.",
"ing rate of 1.0 is used), no dropout, and no label-smoothing.",
"We use a tensor-wise (cid:96) 2 normalization to 1.0 for all gradients (gradient clipping).",
"Updates for a sentence pair are repeated until the perplexity on that sentence pair is 2 .",
"0 , for a maximum of three repetitions.",
"The fixed adaptation scheme, which involves selecting a subset of parameters on held-out data following Wuebker et al. (2018), uses about two million parameters excluding all embedding matrices, in addition to potentially the full source embeddings, but in practice this is limited to about 1M parameters.",
"The top scheme only adapts the top layers for both encoder and decoder.",
"For the lasso adaptation, we allow 1M parameters excluding the embeddings, for which we allow 1M parameters in total selected from all embedding matrices.",
"This scheme also always includes the previously described second bias term in the final output layer.",
"Since the proposed metrics operate on words, the machine translation outputs are first converted to full-form words using sentencepiece (Kudo and Richardson, 2018), then tokenized and truecased with the tokenizer and truecaser distributed with the Moses toolkit (Koehn et al., 2007).",
"Tables 1 and 2 show the performance of different adaptation techniques on the Autodesk dataset (Zhechev, 2012), a public post-editing software domain dataset for which incremental adaptation is known to provide large gains for corpus-level metrics.",
"BLEU, sentence BLEU, and TER scores (Table 1) are similar for full adaptation, sparse adaptation with group lasso , and adaptation of a fixed subset of parameters.",
"However (in Table 2), Method R0 R1 R0+1 baseline 39.3 44.9 41.0 bias 39.3 45.3 41.1 full 35.8 55.0 41.6 lasso 40.3 48.6 42.8 fixed 35.8 52.3 40.8 top 35.6 50.3 40.0 Table 2: Results on the Autodesk test set for the proposed metrics R0, R1, and R0+1.",
"lasso substantially outperforms the other methods in zero-shot (R0), and combined zeroand one-shot recall of content words (R0+1).",
"Zero-shot recall is considerably degraded relative to the non-adapted baseline for both full and adaptation of a fixed subset of tensors ( fixed and top ).",
"That is, terms never observed before during online training are translated correctly less often than they would be with an unadapted system, despite the data set's consistent domain.",
"These approaches trade off long-term gains in BLEU and high one-shot recall for low zero-shot recall, which could be frustrating for users who may perceive the degradation in quality for terms appearing for the first time in a document.",
"The lasso technique is the only one that shows an improvement in R0 over the baseline.",
"However, lasso has considerably lower one-shot recall compared to the other adaptation methods, implying that it often must observe a translated term more than once to acquire it.",
"Appendix A shows similar experiments for several other datasets.",
"For a better understanding of the results described in the previous section, we conduct an analysis varying the units of the proposed metrics, while focusing on full and lasso adaptation.",
"For the first variant, only truly novel words are taken into account, i.e. words in the test set that do not appear in the training data.",
"Results for these experiments are depicted in Table",
"3. It is apparent that the findings of Table 2 are confirmed, and that relative differences are amplified.",
"This can be explained by the reduced number of total occurrences considered, which is only 310 words in this data set.",
"It is also important to note that all of these Method R0 R1 R0+1 baseline 27.1 40.7 29.9 full 26.1 63.0 33.8 lasso 31.9 53.1 36.3 Table 3: Results on Autodesk data calculating the metrics only for truly novel content words, i.e. ones that do not occur in the training data.",
"words are made up of known subwords 7 , since our NMT system does not include a copying mechanism and is thus constrained to the target vocabulary.",
"Further results using the raw subword output 8 of the MT systems are depicted in Table 4: R0 for the lasso method is degraded only slightly below the baseline (-1%, compared to +2% for the regular metric), the findings for R1 and R0+1 remain the same as observed before.",
"Compared to the results for novel words this indicates that the improvement in terms of R0 for lasso mostly come from learning new combinations of subwords.",
"To summarize: In some cases, the strong gains in corpus-level translation quality achieved by fine tuning an NMT model come at the expense of zero-shot recall of content words.",
"This concerning impact of adaptation could affect practical user experience.",
"Existing regularization methods mitigate this effect to some degree, but there may be more effective techniques for immediate adaptation that have yet to be developed.",
"The proposed metrics R0, R1, and R0+1 are useful for measuring immediate adaptation performance, which is crucial in adaptive CAT systems.",
"7 The test set does not contain any unknown characters.",
"8 Note that this includes all tokens, not just parts of content words."
] | [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"This work exploits translation data as a source of semantically relevant learning signal for models of word representation.",
"In particular, we exploit equivalence through translation as a form of distributional context and jointly learn how to embed and align with a deep generative model.",
"Our EMBEDALIGN model embeds words in their complete observed context and learns by marginalisation of latent lexical alignments.",
"In addition, it embeds words as posterior probability densities rather than point estimates, which allows us to compare words in context using a measure of overlap between distributions (e.g. KL divergence).",
"We investigate our model's performance on a range of lexical semantics tasks achieving competitive results on several standard benchmarks including natural language inference, paraphrasing, and text similarity.",
"Natural language processing applications often count on the availability of word representations trained on large textual data as a means to alleviate problems such as data sparsity and lack of linguistic resources (Collobert et al., 2011; Socher et al., 2011; Tu et al., 2017; Bowman et al., 2015).",
"Traditional approaches to inducing word representations circumvent the need for explicit semantic annotation by capitalising on some form of indirect semantic supervision.",
"A typical example is to fit a binary classifier to detect whether or not a target word is likely to co-occur with neighbouring words (Mikolov et al., 2013).",
"If the binary classifier represents a word as a continuous vector, that vector will be trained to be discriminative of the contexts it co-occurs with, and thus words in similar contexts will have similar representations.",
"The underlying assumption is that context (e.g. neighbouring words) stands for the meaning of the target word (Harris, 1954; Firth, 1957).",
"The success of this distributional hypothesis hinges on the definition of context and different models are based on different definitions.",
"Importantly, the nature of the context determines the range of linguistic properties the representations may capture (Levy and Goldberg, 2014b).",
"For example, Levy and Goldberg (2014a) propose to use syntactic context derived from dependency parses.",
"They show that their representations are much more discriminative of syntactic function than models based on immediate neighbourhood (Mikolov et al., 2013).",
"In this work, we take lexical translation as indirect semantic supervision (Diab and Resnik, 2002).",
"Effectively we make two assumptions.",
"First, that every word has a foreign equivalent that stands for its meaning.",
"Second, that we can find this equivalent in translation data through lexical alignments.",
"For that, we induce both a latent mapping between words in a bilingual sentence pair and distributions over latent word representations.",
"To summarise our contributions: we model a joint distribution over sentence pairs that generates data from latent word representations and latent lexical alignments; we embed words in context mining positive correlations from translation data; we find that foreign observations are necessary for generative training, but test time predictions can be made monolingually; we apply our model to a range of semantic natural language processing tasks showing its usefulness.",
"1 These assumptions are not new to the community, but in this work they lead to a novel model which reaches more applications.",
"§4 expands on the relation to other uses of bilingual data for word representation.",
"In a nutshell, we model a distribution over pairs of sentences expressed in two languages, namely, a language of interest L1 , and an auxiliary language L2 which our model uses to mine some learning signal.",
"Our model, EMBEDALIGN , is governed by a simple generative story:",
"1. sample a length m for a sentence in L1 and a length n for a sentence in L2;",
"2. generate a sequence $z_1, \ldots, z_m$ of $d$-dimensional random embeddings by sampling independently from a standard Gaussian prior;",
"3. generate a word observation $x_i$ in the vocabulary of L1 conditioned on the random embedding $z_i$;",
"4. generate a sequence $a_1, \ldots, a_n$ of n random alignments, each mapping a position $a_j$ in $x_1^m$ to a position $j$ in the L2 sentence;",
"5. finally, generate an observation $y_j$ in the vocabulary of L2 conditioned on the random embedding $z_{a_j}$ that stands for $x_{a_j}$.",
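To make the story concrete, the following sketch samples a synthetic sentence pair from steps 1-5; the vocabulary sizes, dimensionality, and projection matrices are toy stand-ins for the trained parameters of f and g, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, v_x, v_y = 4, 10, 12         # toy embedding size and vocabulary sizes
W1 = rng.normal(size=(v_x, d))  # stand-in for the trained L1 projection
W2 = rng.normal(size=(v_y, d))  # stand-in for the trained L2 projection

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

m, n = 5, 6                                                  # 1. sentence lengths
z = rng.standard_normal((m, d))                              # 2. z_i ~ N(0, I)
x = [rng.choice(v_x, p=softmax(W1 @ z_i)) for z_i in z]      # 3. x_i | z_i ~ Cat(f(z_i))
a = rng.integers(0, m, size=n)                               # 4. a_j | m ~ U(1/m)
y = [rng.choice(v_y, p=softmax(W2 @ z[a_j])) for a_j in a]   # 5. y_j | z_{a_j} ~ Cat(g(z_{a_j}))
print(x, a.tolist(), y)
```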
"The model is parameterised by neural networks and parameters are estimated to maximise a lowerbound on log-likelihood of joint observations.",
"In the following, we present the model formally (§2.1), discuss efficient training (§2.2), and present concrete architectures (§2.3).",
"Notation: We use block capitals (e.g. X) for random variables, lowercase letters (e.g. x) for assignments, and the shorthand $X_1^m$ for a sequence $X_1, \ldots, X_m$.",
"Boldface letters are reserved for deterministic vectors (e.g. v ) and matrices (e.g. W ).",
"Finally, $\mathbb{E}[f(Z); \lambda]$ denotes the expected value of $f(z)$ under a density $q(z \mid \lambda)$.",
"We model a joint distribution over bilingual parallel data, i.e., L1 L2 sentence pairs.",
"An observation is a pair of random sequences $\langle X_1^m, Y_1^n \rangle$, where a random variable X (Y) takes on values in the vocabulary of L1 (L2).",
"For ease of exposition, the length m ( n ) of each sequence is assumed observed throughout.",
"The L1 sentence is generated one word at a time from a random sequence of latent embeddings $Z_1^m$, each $Z$ taking on values in $\mathbb{R}^d$.",
"The L2 sentence is generated one word at a time given a random sequence of latent alignments $A_1^n$, where $A_j \in \{1, \ldots, m\}$ is the position in the L1 sentence to which $y_j$ aligns.",
"For $i \in \{1, \ldots, m\}$ and $j \in \{1, \ldots, n\}$ the generative story is: $Z_i \sim \mathcal{N}(0, I)$ (1a); $X_i \mid z_i \sim \mathrm{Cat}(f(z_i; \theta))$ (1b); $A_j \mid m \sim \mathcal{U}(1/m)$ (1c); $Y_j \mid z_1^m, a_j \sim \mathrm{Cat}(g(z_{a_j}; \theta))$ (1d); Figure 1 is a graphical depiction of our model.",
"We map from latent embeddings to categorical distributions over either vocabulary using a neural network whose parameters are deterministic and collectively denoted by $\theta$ (the generative parameters).",
"The marginal likelihood of a sentence pair is shown in Equation (2).",
"$P(x_1^m, y_1^n \mid m, n) = \int p(z_1^m) \prod_{i=1}^{m} P(x_i \mid z_i) \prod_{j=1}^{n} \sum_{a_j=1}^{m} P(a_j \mid m)\, P(y_j \mid z_{a_j})\, \mathrm{d}z_1^m$ (2). Due to the conditional independences of our model, it is trivial to marginalise lexical alignments for any given latent embeddings $z_1^m$, but marginalising the embeddings themselves is intractable.",
"Thus, we employ amortised mean-field variational inference using the inference model $q(z_1^m \mid x_1^m) \triangleq \prod_{i=1}^{m} \mathcal{N}(z_i \mid u_i, \mathrm{diag}(s_i \odot s_i))$ (3), where each factor is a diagonal Gaussian.",
"We map from $x_1^m$ to a sequence $u_1^m$ of independent posterior mean (or location) vectors, where $u_i \triangleq \mu(h_i; \lambda)$, as well as a sequence $s_1^m$ of independent standard deviation (or scale) vectors, where $s_i \triangleq \sigma(h_i; \lambda)$, and $h_1^m = \mathrm{enc}(x_1^m; \lambda)$ is a deterministic encoding of the L1 sequence (we discuss concrete architectures in §2.3).",
"2 We pad L1 sentences with NULL to account for untranslatable L2 words (Brown et al., 1993); Schulz et al. (2016) instead generate untranslatable words from L2 context, an alternative we leave for future work.",
"All mappings are realised by neural networks whose parameters are collectively denoted by $\lambda$ (the variational parameters).",
"Note that we choose to approximate the posterior without conditioning on y n 1 .",
"This allows us to use the inference model for monolingual prediction in absence of L2 data.",
"Variational and generative parameters are jointly point-estimated to attain a local optimum of the evidence lower bound (Jordan et al., 1999): $\log P(x_1^m, y_1^n \mid m, n) \geq \sum_{i=1}^{m} \mathbb{E}[\log P(x_i \mid Z_i); u_i, s_i] + \sum_{j=1}^{n} \mathbb{E}\big[\log \sum_{a_j=1}^{m} P(a_j \mid m)\, P(y_j \mid Z_{a_j}); u_1^m, s_1^m\big] - \sum_{i=1}^{m} \mathrm{KL}\big[\mathcal{N}(u_i, \mathrm{diag}(s_i \odot s_i)) \,\big\|\, \mathcal{N}(0, I)\big]$ (4).",
"The variational family is location-scale, thus we can rely on stochastic optimisation (Robbins and Monro, 1951) and automatic differentiation (Baydin et al., 2015) with reparameterised gradient estimates (Kingma and Welling, 2014; Rezende et al., 2014; Titsias and Lazaro-Gredilla, 2014).",
"Moreover, because the Gaussian density is an exponential family, the KL terms in (4) are available in closed-form (Kingma and Welling, 2014, Appendix B).",
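For reference, the closed-form KL term for a diagonal Gaussian posterior against the standard Gaussian prior can be written in a few lines; this is the standard identity rather than code from the paper.

```python
import numpy as np

def kl_diag_gaussian_vs_standard(u, s):
    """KL[N(u, diag(s*s)) || N(0, I)] in closed form, with u the posterior
    means and s the posterior standard deviations (both 1-D arrays)."""
    return 0.5 * np.sum(s**2 + u**2 - 1.0 - np.log(s**2))
```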
"The likelihood terms in the ELBO (4) require evaluating two softmax layers over rather large vocabularies.",
"This makes training prohibitively slow and calls for efficient approximation.",
"We employ an approximation proposed by Botev et al. (2017) termed complementary sum sampling (CSS), which we review in this section.",
"Consider the likelihood term $\log P(X = \mathrm{x} \mid z)$ that scores an observation $\mathrm{x}$ given a sampled embedding $z$; we use a serif font $\mathrm{x}$ to distinguish a particular observation from an arbitrary event $x \in \mathcal{X}$ in the support.",
"The exact class probability $P(X = \mathrm{x} \mid z) = \frac{\exp(u(z, \mathrm{x}))}{\sum_{x \in \mathcal{X}} \exp(u(z, x))}$ (5) requires a normalisation over the complete support.",
"CSS works by splitting the support into two sets: a set $C$ that is explicitly summed over and must include the positive class $\mathrm{x}$, and another set $N$ that is a subset of the complement set $\mathcal{X} \setminus C$.",
"We obtain an estimate for the normaliser $\sum_{x \in C} \exp(u(z, x)) + \sum_{x \in N} \kappa(x) \exp(u(z, x))$ (6) by importance- or Bernoulli-sampling from the support using a proposal distribution $Q(X)$, where $\kappa(x)$ corrects for bias as $N$ tends to the entire complement set.",
"In this paper, we design $C$ and $N$ per training mini-batch: we take $C$ to consist of all unique words in a mini-batch of training samples and $N$ to consist of $10^3$ negative classes uniformly sampled from the complement set $\mathcal{X} \setminus C$, in which case $\kappa(x) = 10^{-3}\,|\mathcal{X} \setminus C|$.",
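A sketch of this per-mini-batch CSS construction follows; `scores` is a hypothetical function returning the unnormalised logits u(z, x) for an array of class ids, and the bias-correction weight (which we call `kappa`, a name we introduce) follows the uniform-sampling case described above.

```python
import numpy as np

def css_normaliser(z, batch_word_ids, vocab_size, scores, n_neg=1000, rng=None):
    """Complementary sum sampling (CSS) estimate of the softmax normaliser.

    C: all unique words in the mini-batch (always contains the positive class).
    N: up to n_neg classes sampled uniformly without replacement from the
       complement X \\ C, each reweighted by kappa = |X \\ C| / |N|.
    """
    rng = rng or np.random.default_rng()
    C = np.unique(batch_word_ids)
    in_C = np.zeros(vocab_size, dtype=bool)
    in_C[C] = True
    neg_pool = np.flatnonzero(~in_C)                  # the complement set X \ C
    N = rng.choice(neg_pool, size=min(n_neg, len(neg_pool)), replace=False)
    kappa = len(neg_pool) / len(N)                    # bias correction weight
    return np.sum(np.exp(scores(z, C))) + kappa * np.sum(np.exp(scores(z, N)))
```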
"CSS makes it particularly easy to approximate likelihood terms such as those with respect to L2 in Equation (4).",
"Because those terms depend on a marginalisation over alignments, an approximation must give support to all words in the sequence $y_1^n$.",
"With CSS this is extremely simple: we just need to make sure all unique words in $y_1^n$ are in the set $C$, which our mini-batch procedure guarantees.",
"Botev et al. (2017) show that CSS is rather stable and superior to the most popular softmax approximations.",
"Besides being simple to implement, CSS also addresses a few problems with other approximations.",
"To name a few: unlike importance sampling approximations, CSS converges to the exact softmax with bounded computation (it takes as many samples as there are classes).",
"Unlike hierarchical softmax , CSS only affects training, that is, at test time we simply use the entire support instead of the approximation.",
"Without a softmax approximation, inference for our model would take time proportional to $O(m v_x + m v_y + mn)$, where $v_x$ ($v_y$) corresponds to the size of the vocabulary of L1 (L2).",
"The first term ($m v_x$) corresponds to projecting from m latent embeddings to m categorical distributions over the vocabulary of L1.",
"The second term ($m v_y$) corresponds to projecting the same m latent embeddings to m categorical distributions over the vocabulary of L2.",
"Finally, the third term ($mn$) is due to the marginalisation of alignments.",
"3 We sample uniformly from the complement set until we have $10^3$ unique classes.",
"We realise this operation outside the computation graph providing C and N as inputs to each training iteration, but a GPU-based solution is also possible.",
"Note, however, that with the CSS approximation we drop the dependency on vocabulary sizes (as the combined size of C and N is a constant independent of the vocabulary).",
"Moreover, if inference is performed on GPU, the squared term ($mn \approx m^2$) is amortised due to parallelism.",
"Thus, while training our model is somewhat slower than monolingual models of word representation, which typically run in O ( m ) , it is not at all impracticably slower.",
"Here we present the neural network architectures that parameterise the different generative and variational components of §2.1.",
"Refer to Appendix B for an illustration.",
"Generative model We have two generative components, namely, a categorical distribution over the vocabulary of L1 and another over the vocabulary of L2 .",
"We predict the parameter (event probabilities) of each distribution with an affine transformation of a latent embedding followed by the softmax nonlinearity to ensure normalisation: $f(z_i; \theta) = \mathrm{softmax}(W_1 z_i + b_1)$ (7a) and $g(z_{a_j}; \theta) = \mathrm{softmax}(W_2 z_{a_j} + b_2)$ (7b), where $W_1 \in \mathbb{R}^{v_x \times d}$, $b_1 \in \mathbb{R}^{v_x}$, $W_2 \in \mathbb{R}^{v_y \times d}$, $b_2 \in \mathbb{R}^{v_y}$, and $v_x$ ($v_y$) is the size of the vocabulary of L1 (L2).",
"With the approximation of §2.2, we replace the L1 softmax layer (7a) at training time by $\exp(z_i^{\top} c_x + b_x)$ normalised by the CSS estimate (6), and similarly for the L2 softmax layer (7b).",
"In that case, we have parameters $c_x, c_y \in \mathbb{R}^d$ (deterministic embeddings for x and y, respectively) as well as bias terms $b_x, b_y \in \mathbb{R}$.",
"Inference model: We predict approximate posterior parameters using two independent transformations $u_i = M_1 h_i + d_1$ (8a) and $s_i = \mathrm{softplus}(M_2 h_i + d_2)$ (8b) of a shared representation $h_i \in \mathbb{R}^{d_x}$ of the $i$th word in the L1 sequence $x_1^m$, where $M_1, M_2 \in \mathbb{R}^{d \times d_x}$ are projection matrices, $d_1, d_2 \in \mathbb{R}^{d}$ are bias vectors, and the softplus nonlinearity ensures that standard deviations are non-negative.",
"To obtain the deterministic encoding $h_1^m$, we employ two different architectures: (1) a bag-of-words (BOW) encoder, where $h_i$ is a deterministic projection of $x_i$ onto $\mathbb{R}^{d_x}$; and (2) a bidirectional (BIRNN) encoder, where $h_i$ is the element-wise sum of two LSTM hidden states ($i$th step) that process the sequence in opposite directions.",
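A minimal sketch of the two inference heads in Eq. (8), assuming the shared encoding h_i is already computed by the BOW or BIRNN encoder; all parameter names are illustrative.

```python
import numpy as np

def softplus(t):
    # numerically stable softplus: log(1 + exp(t))
    return np.log1p(np.exp(-np.abs(t))) + np.maximum(t, 0.0)

def inference_heads(h, M1, d1, M2, d2):
    """Map a shared encoding h_i to a posterior location u_i (Eq. 8a) and a
    non-negative scale s_i (Eq. 8b); M1, M2 are d x d_x projections,
    d1, d2 bias vectors."""
    u = M1 @ h + d1
    s = softplus(M2 @ h + d2)
    return u, s
```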
"We use 128 units for deterministic embeddings, and 100 units for LSTMs (Hochreiter and Schmidhuber, 1997) and latent representations (i.e. d = 100 ).",
"We start the section describing the data used to estimate our model's parameters as well as details about the optimiser.",
"The remainder of the section presents results on various benchmarks.",
"Training data We train our model on bilingual parallel data.",
"In particular, we use parliament proceedings (Europarl-v7) (Koehn, 2005) from two language pairs: English-French and English-German.",
"We employed very minimal preprocessing, namely tokenisation and lowercasing using scripts from MOSES (Koehn et al., 2007), and discarded sentences longer than 50 tokens.",
"Table 1 lists more information about the training data, including the English-French Giga web corpus (Bojar et al., 2014) which we use in §3.4.",
"Table 1: sentence-pair and token counts for the training corpora (Europarl EN-FR, Europarl EN-DE, and the Giga EN-FR web corpus); the individual counts were not recoverable from the extraction.",
"Optimiser: For all architectures, we use the Adam optimiser (Kingma and Ba, 2014) with a learning rate of $10^{-3}$.",
"Except where explicitly indicated, we train our models for 30 epochs using mini-batches of 100 sentence pairs; use validation alignment error rate for model selection; train every model 10 times with random Glorot initialisation (Glorot and Bengio, 2010) and report mean and standard deviation; and anneal the KL terms using the following schedule: a scalar goes from 0 to 1 in additive steps of size $10^{-3}$ every 500 updates.",
"This means that at the beginning of the training, we allow the model to overfit to the likelihood terms, but towards the end we are optimising the true ELBO (Bowman et al., 2016).",
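The annealing coefficient itself is a simple step function; a minimal sketch under the schedule just described:

```python
def kl_weight(step, step_size=1e-3, every=500):
    """KL annealing coefficient: grows from 0 to 1 in additive steps of
    `step_size` every `every` updates, then stays at 1 (the true ELBO)."""
    return min(1.0, (step // every) * step_size)
```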
"It is also important to highlight that we do not employ regularisation techniques (such as batch normalisation, dropout, or L 2 penalty) for they did not seem to yield consistent results.",
"Since our model leverages learning signal from parallel data by marginalising latent lexical alignments, we use alignment error rate to double check whether the model learns sensible word correspondences.",
"Intrinsic assessment of word alignment quality requires manual annotation.",
"For English-French, we use the NAACL English-French hand-aligned data ( 37 sentence pairs for validation and 447 for test) (Mihalcea and Pedersen, 2003).",
"For English-German, we use the data by Padó and Lapata (2006) (98 sentence pairs for validation and 987 for test).",
"Alignment quality is then measured in terms of alignment error rate (AER) (Och and Ney, 2000), an F-measure over predicted alignment links.",
"For prediction we condition on the posterior means $\mathbb{E}[Z_1^m]$, which is just the predicted variational means $u_1^m$, and select the L1 position for which $P(y_j, a_j \mid u_1^m)$ is maximal (a form of approximate Viterbi alignment).",
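A sketch of this prediction rule follows; since the alignment prior is uniform, maximising P(y_j, a_j | u_1^m) over positions reduces to maximising P(y_j | u_i). Here W2 and b2 stand for the L2 softmax parameters of Eq. (7b).

```python
import numpy as np

def align(u, y_ids, W2, b2):
    """Approximate Viterbi alignment: condition on posterior means u (m x d)
    and pick, for each L2 word y_j, the L1 position maximising
    P(a_j | m) * P(y_j | u_{a_j}); the uniform prior drops out of the argmax."""
    logits = u @ W2.T + b2                                           # m x v_y
    log_probs = logits - np.logaddexp.reduce(logits, axis=1, keepdims=True)
    return [int(np.argmax(log_probs[:, y_j])) for y_j in y_ids]
```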
"We start by analysing validation results and selecting amongst a few variants of EMBEDALIGN .",
"We investigate the use of annealing and the use of a bidirectional encoder in the variational approximation.",
"Table 2 (3) lists AER for EN-FR (EN-DE ) as well as accuracy of word prediction.",
"It is clear that both annealing (systems decorated with a subscript in the tables) and bidirectional representations improve the results across the board.",
"In the rest of the paper we still investigate whether or not recurrent encoders help, but we always report results based on annealing.",
"In order to establish baselines for our models we report IBM models 1 and 2 (Brown et al., 1993).",
"In a nutshell, IBM models 1 and 2 both estimate the conditional $P(y_j \mid x_1^m) = \sum_{a_j=1}^{m} P(a_j \mid m)\, P(y_j \mid x_{a_j})$ by marginalisation of latent lexical alignments.",
"The only difference between the two models is the prior over alignments, which is uniform for IBM1 and categorical for IBM2.",
"An important difference between IBM models and EMBEDALIGN concerns the lexical distribution.",
"IBM models are parameterised with independent categorical parameters, while our model instead is parameterised by a neural network.",
"IBM models condition on a single categorical event x a j , namely, the word aligned to.",
"Our model instead conditions on the latent embedding z a j that stands for the word aligned to.",
"In order to establish even stronger conditional alignment models, we embed the conditioning words and replace IBM1's independent parameters by a neural network (single hidden layer MLP).",
"We call this model a neural IBM1 (or NIBM for short).",
"Note that in an IBM model, the sequence x m 1 is never modelled, therefore we can condition on it without restrictions.",
"For that reason, we also experiment with a bidirectional LSTM encoder and condition lexical distributions on its hidden states.",
"Table 4 shows AER for test predictions.",
"First observe that neural models outperform classic IBM1 by far, some of them even approach IBM2's performance.",
"Next, observe that bidirectional encodings make NIBM much stronger at inducing good word-to-word correspondences.",
"EMBEDALIGN cannot catch up with NIBM, but that is not necessarily surprising.",
"Note that NIBM is a conditional model, thus it can use all of its capacity to better explain L2 data.",
"EMBEDALIGN , on the other hand, has to find a compromise between generating both streams of the data.",
"To make that point a bit more obvious, Table 5 (6) lists accuracy of word prediction for EN-FR (EN-DE ).",
"Note that, without sacrificing L2 accuracy, and sometimes even improving it, EMBEDALIGN achieves very high L1 accuracy.",
"This still does not imply that induced representations have captured aspects of lexical semantics such as word senses.",
"All this means is that we have induced features that are jointly good at reconstructing both streams of the data one word at time.",
"Of course it is tempting to conclude that our models must be capturing some useful generalisations.",
"For that, the next sections will investigate a range of semantic NLP tasks.",
"The English lexical substitution task (LST) consists in selecting a substitute word for a target word in context (McCarthy and Navigli, 2009).",
"In the most traditional variant of the task, systems are presented with a list of potential candidates and this list must be sorted by relatedness.",
"Dataset The LST dataset includes 201 target words present in 10 sentences/contexts each, along with a manually annotated list of potential replacements.",
"The data are split into 300 instances for validation and 1,710 for test.",
"Systems are evaluated by comparing the predicted ranking to the manual one in terms of generalised average precision (GAP) (Melamud et al., 2015).",
"Prediction We use EMBEDALIGN to encode each candidate (in context) as a posterior Gaussian density.",
"Note that this task dispenses with inferences about L2 .",
"Each candidate is compared to the target word in context through a measure of overlap between their inferred densities; we take KL divergence.",
"We then rank candidates using this measure.",
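A sketch of this ranking procedure, using the closed-form KL between diagonal Gaussians; the direction of the divergence (candidate relative to target) is our choice of convention, as the text does not pin it down.

```python
import numpy as np

def kl_between_diag_gaussians(u1, s1, u2, s2):
    """KL[N(u1, diag(s1^2)) || N(u2, diag(s2^2))] for diagonal Gaussians."""
    var1, var2 = s1**2, s2**2
    return 0.5 * np.sum(np.log(var2 / var1) + (var1 + (u1 - u2)**2) / var2 - 1.0)

def rank_candidates(target_post, candidate_posts):
    """Rank substitution candidates by increasing KL divergence from the
    target word's in-context posterior (lower divergence = more related).
    `target_post` is a (u, s) pair; `candidate_posts` a list of such pairs."""
    kls = [kl_between_diag_gaussians(u, s, *target_post) for (u, s) in candidate_posts]
    return np.argsort(kls)
```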
"Table 7 lists GAP scores for variants of EMBEDALIGN (bottom section) as well as some baselines and other established methods (top section).",
"For comparison, we also compute GAP by sorting candidates in terms of cosine similarity, in which case we take the Gaussian mean as a summary of the density.",
"The top section of the table contains systems reported by Melamud et al. (2015) (RANDOM and SKIPGRAM ) and by Brazinskas et al. (2017) (BSG).",
"Note that both SKIPGRAM (Mikolov et al., 2013) and BSG were trained on the very large ukWaC English corpus (Ferraresi et al., 2008).",
"SKIPGRAM is known to perform remarkably well regardless of its apparent insensitivity to context (in terms of design).",
"BSG is a close relative of our model which gives SKIPGRAM a Bayesian treatment (also by means of amortised variational inference) and is by design sensitive to context in a manner similar to EMBEDALIGN , that is, through its inferred posteriors.",
"Our first observation is that cosine seems to outperform KL slightly.",
"Others have shown that KL can be used to predict directional entailment (Vilnis and McCallum, 2014; Brazinskas et al., 2017); since LST is closer to paraphrasing than to entailment, directionality may be a distractor, but we leave it as a rather speculative point.",
"Table 8: English sentence evaluation results (columns: MR, CR, SUBJ, MPQA, SST, TREC, MRPC, SICK-R, SICK-E, SST14; the last four rows correspond to the mean of 10 runs with EMBEDALIGN models). W2VEC: 77.7, 79.8, 90.9, 88.3, 79.7, 83.6, 72.5/81.4, 0.80, 78.7, 0.65/0.64. NMT: 64.7, 70.1, 84.9, 81.5, -, 82.8, -/-, -, -, 0.43/0.42. EN: 57.6, 66.2, 70.9, 71.8, 58.0, 62.9, 70.3/80.1, 0.62, 73.7, 0.54/0.55. EN-FR: 63.5, 71.5, 78.9, 82.3, 65.1, 62.1, 71.4/80.5, 0.69, 75.9, 0.69/0.59. EN-DE: 64.0, 68.9, 77.9, 81.8, 65.1, 59.5, 71.2/80.5, 0.69, 74.8, 0.62/0.61. COMBO: 66.7, 73.1, 82.4, 84.8, 69.2, 67.7, 71.8/80.7, 0.73, 77.4, 0.62/0.61.",
"One additional point worth highlighting is the middle section of Table 7.",
"EN BoW and EN BiRNN show what happens when we do not give EMBEDALIGN L2 supervision at training.",
"That is, imagine the model of Figure 1 without the bottom plate.",
"In that case, the model representations overfit for L1 word-by-word prediction.",
"Without the need to predict any notion of context (monolingual or otherwise), the representations drift away from semantic-driven generalisations and fail at lexical substitution.",
"Conneau et al. (2017) developed a framework to evaluate unsupervised sentence level representations trained on large amounts of data on a range of supervised NLP tasks.",
"We assess our induced representations using their framework on the following benchmarks, evaluated on classification accuracy (MRPC is further evaluated on F1): MR, classification of positive or negative movie reviews; SST, fine-grained labelling of movie reviews from the Stanford sentiment treebank; TREC, classification of questions into k classes; CR, classification of positive or negative product reviews; SUBJ, classification of a sentence as subjective or objective; MPQA, classification of opinion polarity; SICK-E, textual entailment classification; MRPC, paraphrase identification in the Microsoft paraphrase corpus. The following benchmarks are evaluated on the indicated correlation metric(s): SICK-R, semantic relatedness between two sentences (Pearson); SST-14, semantic textual similarity (Pearson/Spearman).",
"Prediction We use EMBEDALIGN to annotate every word in the training set of the benchmarks above with the posterior mean embedding in context.",
"We then average the embeddings in a sentence and give that as features to a logistic regression classifier trained with 5-fold cross validation.",
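A minimal sketch of this evaluation pipeline using scikit-learn; the per-token posterior mean embeddings are assumed to be precomputed with the inference model, and all names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sentence_features(token_posterior_means):
    """Average the in-context posterior mean embeddings of one sentence."""
    return np.mean(token_posterior_means, axis=0)

def evaluate(X, y):
    """X: one averaged embedding per sentence; y: benchmark labels.
    Returns mean accuracy over 5-fold cross validation."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()
```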
"For comparison, we report a SKIPGRAM model (here indicated as W2VEC) as well as a model that uses the encoder of a neural machine translation system (NMT) trained on English-French Europarl data.",
"In both cases, we report results by Conneau et al. (2017).",
"Table 8 shows the results for all benchmarks.",
"We report EMBEDALIGN trained on either EN-FR or EN-DE.",
"The last line (COMBO ) shows what happens if we train logistic regression on the concatenation of embeddings inferred by both EMBEDALIGN models, that is, EN-FR and EN-DE .",
"Note that these two systems perform sometimes better sometimes worse depending on the benchmark.",
"There is no clear pattern, but differences may well come from some qualitative difference in the induced latent space.",
"It is a known fact that different languages realise lexical ambiguities differently, thus representations induced towards different languages are likely to capture different generalisations.",
"8 As COMBO results show, the representations induced from different corpora are somewhat complementary.",
"That same observation has guided paraphrasing models based on pivoting (Bannard and Callison-Burch, 2005).",
"Once more we report a monolingual variant of EMBEDALIGN (indicated by EN) in an attempt to illustrate how crucial the bilingual signal is.",
"6 http://scikit-learn.org/stable/",
"7 In Appendix A we provide bar plots marked with error bars (2 standard deviations).",
"8 We also acknowledge that our treatment of German is likely suboptimal due to the lack of subword features, as can also be seen in the AER results.",
"Word similarity benchmarks are composed of word pairs which are manually ranked out of context.",
"For completeness, we also tried evaluating our embeddings in such benchmarks despite our work being focussed on applications where context matters.",
"Prediction To assign an embedding for a word type, we infer Gaussian posteriors for all training instances of that type in context and aggregate the posterior means through an average (effectively collapsing all instances).",
"To cover the vocabulary of the typical benchmark, we have to use a much larger bilingual collection than Europarl.",
"Based on the results of §3.1, we decided to proceed with English-French only; recall that models based on that pair performed better in terms of AER.",
"Results in this section are based on EMBEDALIGN (with bidirectional variational encoder) trained on the Giga web corpus (see Table 1 for statistics).",
"Due to the scale of the experiment, we report on a single run.",
"We trained on Giga with the same hyperparameters as on Europarl, however for 3 epochs instead of 30 (with this dataset an epoch amounts to 183,000 updates).",
"Again, we performed model selection on AER.",
"Table 9 shows the results for several datasets using the framework of Faruqui and Dyer (2014a).",
"Note that EMBEDALIGN was designed to make use of context information, thus this evaluation setup is a bit unnatural for our model.",
"Still, it outperforms SKIPGRAM on 5 out of 13 benchmarks, in particular, on SIMLEX-999, whose relevance has been argued by Upadhyay et al. (2016).",
"We also remark that this model achieves 0.25 test AER and 45.16 test GAP on lexical substitution, a considerable improvement compared to models trained on Europarl and reported in Tables 4 (AER) and 7 (GAP).",
"Our model is inspired by lexical alignment models such as IBM1 (Brown et al., 1993); however, we generate words $y_1^n$ from a latent vector representation $z_1^m$ of $x_1^m$, rather than directly from the observation $x_1^m$.",
"IBM1 takes L1 sequences as conditioning context and does not model their distribution.",
"Instead, we propose a joint model, where L1 sentences are generated from latent embeddings.",
"There is a vast literature on exploiting multilingual context to strengthen the notion of synonymy captured by monolingual models.",
"Roughly, the literature splits into two groups, namely, approaches that derive additional features and/or training objectives based on pre-trained alignments (Klementiev et al., 2012; Faruqui and Dyer, 2014b; Luong et al., 2015; Suster et al., 2016), and approaches that promote a joint embedding space by working with sentence level representations that dispense with explicit alignments (Hermann and Blunsom, 2014; AP et al., 2014; Gouws et al., 2015; Hill et al., 2014).",
"The work of Kocisky et al. (2014) is closer to ours in that they also learn embeddings by marginalising alignments; however, their model is conditional, much like IBM models, and their embeddings are not part of the probabilistic model, but rather part of the architecture design.",
"The joint formulation allows our latent embeddings to harvest learning signal from L2 while still being driven by the learning signal from L1; in a conditional model, the representations can become specific to alignment, deviating from the purpose of representing the original language well.",
"In §3 we show substantial evidence that our model performs better when using both learning signals.",
"Vilnis and McCallum (2014) first propose to map words into Gaussian densities instead of point estimates for better word representation.",
"For example, a distribution can capture asymmetric relations that a point estimate cannot.",
"Brazinskas et al. (2017) recast the skip-gram model as a conditional variational auto-encoder.",
"They induce a Gaussian density for each occurrence of a word in context, and for that their model is the closest to ours.",
"Additionally, they estimate a Gaussian prior per word type thus representing both types and occurrences.",
"Unlike our model, the Bayesian skip-gram is not trained generatively by reconstructing the data, but rather discriminatively by prediction of overlapping sets of neighbouring words.",
"We have presented a generative model of word representation that learns from positive correlations implicitly expressed in translation data.",
"In order to make these correlations surface, we induce and marginalise latent lexical alignments.",
"Embedding models such as CBOW and skip-gram (Mikolov et al., 2013) are, essentially speaking, supervised classifiers.",
"This means they depend on somewhat artificial strategies to derive labelled data from monolingual corpora: words far from the central word still have co-occurred with it even though they are taken as negative evidence.",
"Training our proposed model does not require a heuristic notion of negative training data.",
"However, the model is also based on a somewhat artificial assumption: L1 words do not necessarily need to have an L2 equivalent and, even when they do, this equivalent need not be realised as a single word.",
"We have shown with extensive experiments that our model can induce representations useful to several tasks including but not limited to alignment (the task it most obviously relates to).",
"We observed interesting results on semantic natural language processing benchmarks such as natural language inference, lexical substitution, paraphrasing, and sentiment classification.",
"We are currently expanding the notion of distributional context to multiple auxiliary foreign languages at once.",
"This seems to only require minor changes to the generative story and could increase the model's disambiguation power dramatically.",
"Another direction worth exploring is to extend the model's hierarchy with respect to how parallel sentences are generated.",
"For example, modelling sentence level latent variables may capture global constraints and expose additional correlations to the model.",
"We thank Philip Schulz for comments on an earlier version of this paper as well as the anonymous NAACL reviewers.",
"One of the Titan Xp cards used for this research was donated by the NVIDIA Corporation.",
"This work was supported by the Dutch Organization for Scientific Research (NWO) VICI Grant nr. 277-89-002."
] | [
"abstain",
"method",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"other",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"other",
"objective",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"With the rapid growth in language processing applications, fairness has emerged as an important consideration in data-driven solutions.",
"Although various fairness definitions have been explored in the recent literature, there is lack of consensus on which metrics most accurately reflect the fairness of a system.",
"In this work, we propose a new formulation ACCUMULATED PREDICTION SENSITIVITY , which measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.",
"The metric attempts to quantify the extent to which a single prediction depends on a protected attribute, where the protected attribute encodes the membership status of an individual in a protected group.",
"We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.",
"It also correlates well with humans' perception of fairness.",
"We conduct experiments on two text classification datasets, JIGSAW TOXICITY and BIAS IN BIOS, and evaluate the correlations between metrics and manual annotations on whether the model produced a fair outcome.",
"We observe that the proposed fairness metric based on prediction sensitivity is statistically significantly more correlated with human annotation than the existing counterfactual fairness metric.",
"Ongoing research is increasingly emphasizing the development of methods which detect and mitigate unfair social bias present in machine learning-based language processing models.",
"These methods come under the umbrella of algorithmic fairness which has been quantitatively expressed with numerous definitions (Mehrabi et al., 2019; Jacobs and Wallach, 2019).",
"These fairness definitions are broadly categorized into two types, i.e., individual fairness and group fairness.",
"* Work done while working at Amazon.",
"Individual fairness (e.g., counter-factual fairness (Kusner et al., 2017)) is aimed at evaluating whether a model gives similar predictions for individuals with similar personal attributes (e.g., age or race).",
"On the other hand, group fairness (e.g., statistical parity (Dwork et al., 2012)) evaluates fairness across cohorts with same protected attributes instead of individuals (Mehrabi et al., 2019).",
"Although these two broad categories of fairness define valid notions of fairness, human understanding of fairness is also used to measure fairness in machine learning models (Dhamala et al., 2021).",
"Existing studies often consider only one or two of these verticals of measuring fairness.",
"In our work, we propose a formulation based on a model's sensitivity to input features, the accumulated prediction sensitivity, to measure the fairness of model predictions.",
"We establish its theoretical relationship with statistical parity (group fairness) and individual fairness (Dwork et al., 2012) metrics.",
"We then demonstrate the correlation between the proposed metric and human perception of fairness using empirical experiments.",
"Researchers have proposed metrics to quantify fairness based on a model's sensitivity to input features.",
"Specifically, Maughan and Near (2020); Ngong et al. (2020) propose a prediction sensitivity metric that attempts to quantify the extent to which a single prediction depends on a protected attribute.",
"The protected attribute encodes the membership status of an individual in a protected group.",
"Prediction sensitivity can be seen as a form of feature attribution, but specialized to the protected attribute.",
"In our work, we extend their concept of prediction sensitivity to propose accumulated prediction sensitivity .",
"Akin to the metric proposed by Maughan and Near (2020) and Ngong et al. (2020), our metric also relies on the model output's sensitivity to changes in input features.",
"Our metric generalizes their notion of sensitivity, where the model sensitivity to various input features can be weighted non-uniformly.",
"We show that the formulation satisfies certain properties under the chosen definitions of group and individual fairness, and we also present several methodologies for selecting the weights assigned to the sensitivity of the model's output to input features.",
"For each selection, we present the correlation between the accumulated prediction sensitivity and human assessment of the model-output fairness.",
"We define our metric in Section 3 and present bounds on it (under settings when a classifier follows the selected group fairness or individual fairness constraints) in Sections 4 and 5, respectively.",
"Next, given that the human perception of fairness is not theoretically defined, we present an empirical study on two text classification tasks in Section 6.",
"We request a group of annotators to annotate whether they think that model output is biased against a specific gender and observe that the proposed metric correlates positively with more biased outcomes.",
"We then observe correlations between our metric and the stated human understanding of fairness.",
"We find that the proposed accumulated prediction sensitivity metric not only correlates positively with human perception of bias, but also beats an existing baseline based on counterfactual fairness.",
"Multiple efforts have looked into defining, measuring, and mitigating biases in NLP models (Sun et al., 2019; Mehrabi et al., 2019; Sheng et al., 2019).",
"Dwork et al. (2012) and Kusner et al. (2017) focus on individual fairness and propose novel classification approaches to ensure that a classification decision is fair towards an individual.",
"Another set of works focus on group fairness.",
"Corbett-Davies et al. (2017) present fair classification to ensure population from different race groups receive similar treatment.",
"Hardt et al. (2016) focus on shifting the cost of incorrect classification from disadvantaged groups.",
"Zhao and Chang (2020) measure group fairness in local regions.",
"Finally, Kearns et al. (2019) combine the best properties of the group and individual notions of fairness.",
"Multiple recent works also focus on developing new dataset and associated metrics to capture various types of biases.",
"For example, Dhamala et al. (2021) and Nangia et al. (2020) propose dataset and metrics to measure social biases and stereotypes in language model generations, Bolukbasi et al. (2016); Caliskan et al. (2017); Manzini et al. (2019) define metrics to access gender and race biases in word vector representations, and Wang et al. (2019) define metrics to quantify and mitigate biases in visual recognition task.",
"Ethayarajh (2020) propose Bernstein bounds to represent uncertainty about the bias.",
"The majority of these bias metrics are automatically computed, for example, using a regard classifier (Sheng et al., 2019), a sentiment classifier (Dhamala et al., 2021), a toxicity classifier (Dixon et al., 2018), or the true positive rate difference between privileged and underprivileged groups (De-Arteaga et al., 2019).",
"A few works additionally validate the alignment of these automatically computed bias metrics with human understanding of biases by collecting annotations of biases on a subset of test data from crowd-workers (Sheng et al., 2019; Dhamala et al., 2021).",
"Blodgett et al. (2021, 2020) discuss the limitations of several of these bias datasets and measurements.",
"However, the majority of existing bias metrics are specific to the model type and the application domain used, they may not be tested for correlation with human judgement of biases, and their relationship to existing definitions of fairness has not been explored.",
"Additionally, metrics such as the true positive rate or error difference between groups require ground truth labels, thereby making their computation in real-time systems difficult.",
"Speicher et al. (2018) have attempted to present a unified approach to measuring group and individual fairness via inequality indices; however, we note that such metrics are non-trivial to extend to unstructured data such as text.",
"For example, gender information in a text may be subtle (e.g. mention of softball) and it is unclear whether presence of this word should be considered to impact the genderness of the text.",
"Accumulated prediction sensitivity metric, presented in this paper, attempts to address all the above limitations of existing bias metrics.",
"We acknowledge that the proposed metric is yet to be associated with other notions of fairness (e.g. preference based notion of fairness (Zafar et al., 2017)).",
"Below, we define accumulated prediction sensitivity , a metric that captures the sensitivity of a model to protected attributes.",
"Let x X be a feature vector drawn from the input space X .",
"Let w , v be stochastic vectors whose entries are non-negative values that sum to one.",
"Given x, let f be a K-class classifier, such that $f(\mathbf{x}) = [f_1(\mathbf{x}), \ldots, f_k(\mathbf{x}), \ldots, f_K(\mathbf{x})]$ denotes the K-dimensional probability output generated by the classifier.",
"We define accumulated prediction sensitivity P as $P = \mathbf{w}^{T} \mathbf{J} \mathbf{v}$, where $\mathbf{J}(k, i) = \left|\frac{\partial f_k(\mathbf{x})}{\partial x_i}\right|$. (1)",
"$\mathbf{J}$ is a matrix such that the $(k, i)$th entry is $\left|\frac{\partial f_k(\mathbf{x})}{\partial x_i}\right|$, where $x_i$ is the $i$th entry in $\mathbf{x}$.",
"The product $\mathbf{w}^{T}\mathbf{J}$ sums the absolute derivatives $\left|\frac{\partial f_k(\mathbf{x})}{\partial x_i}\right|$ across $f_k$, $k = 1, \ldots, K$, and returns a vector of summed derivatives with respect to each $x_i \in \mathbf{x}$.",
"The product of $\mathbf{v}$ with $\mathbf{w}^{T}\mathbf{J}$ further averages the derivatives across all the features $x_i \in \mathbf{x}$ to yield the scalar P.",
"The value $\frac{\partial f_k(\mathbf{x})}{\partial x_i}$ captures the expected change in the model output for the $k$th class given a perturbation in $x_i$.",
"If $x_i$ is a protected feature, arguably a smaller value of $\frac{\partial f_k(\mathbf{x})}{\partial x_i}$ implies a fairer model, as then the model's outcome does not change sharply with changes in $x_i$.",
"To capture the sensitivity of the model with respect to the protected features, one also needs to choose v judiciously.",
"For example, given the explicit set of protected features in x , one can select v such that only entries corresponding to those features are assigned a non-zero value, while the rest are set to zero.",
"Given this heuristic, we expect the value P to be smaller for fairer models.",
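A minimal numerical sketch of Eq. (1) follows; it approximates the Jacobian with central finite differences rather than automatic differentiation, and f stands for any classifier mapping a feature vector to class probabilities (all names are illustrative).

```python
import numpy as np

def accumulated_prediction_sensitivity(f, x, w, v, eps=1e-5):
    """P = w^T J v, where J[k, i] = |d f_k(x) / d x_i| is approximated with
    central finite differences; w (length K) and v (length D) are stochastic
    vectors, i.e. non-negative entries summing to one."""
    K, D = len(w), len(x)
    J = np.zeros((K, D))
    for i in range(D):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[i] += eps
        x_minus[i] -= eps
        J[:, i] = np.abs((f(x_plus) - f(x_minus)) / (2 * eps))
    return float(w @ J @ v)
```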
"In the following sections, we connect the accumulated prediction sensitivity to two known notions of fairness and human perception of fairness.",
"Given a set of protected features (e.g. gender), a model satisfies statistical parity if model outcome is independent of the protected features (we note that identifying protected features may not always be feasible in the real world).",
"We represent the feature vector as $\mathbf{x} = [\mathbf{x}_p, \mathbf{x}_l]$, where $\mathbf{x}_p$ is the set of protected features and $\mathbf{x}_l$ is the remainder.",
"Accordingly, we choose $\mathbf{v}$ to be a vector such that the entries in $\mathbf{J}$ that sum $\left|\frac{\partial f_k(\mathbf{x})}{\partial x_p}\right|$ for $x_p \in \mathbf{x}_p$ are nonzero, and zero otherwise.",
"This choice is intuitive, as then we sum the gradients in $\mathbf{J}$ that correspond to protected features and measure the model's sensitivity to them.",
"1 Note that we use the following notation scheme in this paper: bold capital letters for matrices, bold lowercase letters for vectors, and unbolded letters for scalars.",
"The predictor $f(\mathbf{x})$ will satisfy statistical parity if $f(\mathbf{x}_p, \mathbf{x}_l) = f(\mathbf{x}'_p, \mathbf{x}_l)\ \forall\ \mathbf{x}_p \neq \mathbf{x}'_p$.",
"Given this, we state the following theorem.",
"Theorem 1.",
"Given a vector v with non-zero entries corresponding to x p and zero entries for x l , if the predictor f ( x ) satisfies statistical parity with respect to x p , accumulated prediction sensitivity will be zero.",
"Proof: If $f(\mathbf{x})$ satisfies statistical parity with respect to $\mathbf{x}_p$, the values $\frac{\partial f_k(\mathbf{x})}{\partial x_p}$ for all $x_p \in \mathbf{x}_p$ will be zero.",
"This is due to the fact that the function $f_k(\mathbf{x})$ cannot be defined based on entries $x_p \in \mathbf{x}_p$ if it is to be independent of them.",
"Therefore, for every multiplication in the product $\mathbf{J}\mathbf{v}$, either the entry $\frac{\partial f_k(\mathbf{x})}{\partial x_p}$ will be 0 or the entry in $\mathbf{v}$ corresponding to $\mathbf{x}_l$ will be 0.",
"Hence, P will be 0.",
"Appendix A presents empirical results in computing P on a synthetic dataset.",
"We construct a dataset where a feature (hair length) correlates with a protected attribute (gender).",
"We show that if the modeler unintentionally uses the correlated feature while attempting to build a classifier with statistical parity, our metric can be used for evaluation.",
"Dwork et al. (2012) state the notion of individual based fairness as: \" We interpret the goal of mapping similar people similarly to mean that the distributions assigned to similar people are similar \".",
"They propose adding a Lipschitz property constraint during the classifier optimization.",
"Given a loss function $\mathcal{L}$ defined to optimize the parameters of the classifier $f(\mathbf{x})$, a distance function $d(\mathbf{x}, \mathbf{x}')$ that computes the distance between data-points $\mathbf{x}, \mathbf{x}'$, another distance function $D(f(\mathbf{x}), f(\mathbf{x}'))$ that computes the distance between classifier predictions on $\mathbf{x}, \mathbf{x}'$, and a constant L, Dwork et al. (2012) propose the following constrained optimization: minimize $\mathcal{L}$ subject to $D(f(\mathbf{x}), f(\mathbf{x}')) \leq L \cdot d(\mathbf{x}, \mathbf{x}')$ for all $\mathbf{x}, \mathbf{x}'$. (2)",
"It is natural to choose an Lp norm (Bourbaki, 1987) for d and D .",
"For a classifier f trained with the above constrained optimization, where the chosen distance metrics D, d are Lp norms, we state the following.",
"Theorem 2.",
"If the predictor $f(\mathbf{x})$ is trained with the constrained optimization stated in Eq. (2), the accumulated prediction sensitivity will be upper bounded by L.",
"Proof: We restate the constraint in Eq. (2) as $\forall\, \mathbf{x} \neq \mathbf{x}',\ L > \frac{D(f(\mathbf{x}), f(\mathbf{x}'))}{d(\mathbf{x}, \mathbf{x}')}$ (3); note that the direction of the inequality does not change, as the distance metrics D, d are required to be positive for $\mathbf{x} \neq \mathbf{x}'$.",
"Given that the inequality holds for any pair $\mathbf{x}, \mathbf{x}'$, it must also hold for $\mathbf{x}' = \mathbf{x} + [0, \ldots, 0, \Delta x_i, 0, \ldots, 0]$, where $\Delta x_i$ is a scalar perturbation in the $i$th entry of $\mathbf{x}$.",
"For a chosen Lp norm, Eq. (3) becomes $L > \frac{\left[\sum_{k=1}^{K} |f_k(\mathbf{x}) - f_k(\mathbf{x}')|^p\right]^{1/p}}{|\Delta x_i|} \geq \frac{\left[|f_k(\mathbf{x}) - f_k(\mathbf{x}')|^p\right]^{1/p}}{|\Delta x_i|}$ (4), since each entry $|f_k(\mathbf{x}) - f_k(\mathbf{x}')|^p$, $k = 1, \ldots, K$, is non-negative, so zeroing out all such entries but one yields a value no greater than the summation $\sum_{k=1}^{K} |f_k(\mathbf{x}) - f_k(\mathbf{x}')|^p$.",
"We can rewrite Eq. (4) as $L > \frac{|f_k(\mathbf{x}) - f_k(\mathbf{x} + [0, \ldots, 0, \Delta x_i, 0, \ldots, 0])|}{|\Delta x_i|}$.",
"Therefore, each entry in $\mathbf{J}$ is upper bounded by L.",
"As the vectors $\mathbf{v}$, $\mathbf{w}$ are stochastic and compute weighted averages of bounded entries in $\mathbf{J}$, P (defined in Eq. (1)) must be less than or equal to L.",
"We also note that as L becomes larger, the constraint in Eq. (2) becomes looser.",
"Therefore, a higher value of L during optimization is expected to loosen the fairness constraint as well as the bound on fairness sensitivity.",
"This aligns with our intuition of lower values of P for fairer models.",
"We compute the value of L on synthetically generated classification data, optimized with the individual fairness constraint in Eq. (2).",
"The results are presented in Appendix B.",
"6 Correlations with Human Perception of Fairness",
"While the conditional statistical parity and individual fairness establish theoretical constraints on the model behaviour (e.g. independence from protected features and similarity in prediction outcomes for similar data-points), humans may carry a different notion of fairness for model outcomes on individual data-points.",
"This notion may be based on their understanding of cultural norms, which in turn effect their decisions in identifying which model outputs could be considered biased.",
"In this section, we present experiments that correlate accumulated prediction sensitivity with human perception of fairness.",
"Given a data-point x and model prediction f ( x ) , we assign one of the K classes to the data-point.",
"In order to evaluate the human perception of fairness on the data-point, we request a group of annotators to evaluate the model prediction (taken as the arg-max of the model output) and assess whether they believe the output is biased.",
"For instance, given the social/cultural norms, a profession classifier assigning a data-point she worked in a hospital to nurse instead of doctor can be perceived as biased.",
"To correlate the accumulated prediction sensitivity P with the human understanding of fairness, we conduct experiments on two text classification datasets.",
"We describe the datasets below, followed by our choices for w and v .",
"We experiment with our proposed metric on two classification tasks, i.e, occupation classification on Bias in Bios dataset (De-Arteaga et al., 2019) 2 and toxicity classification with Jigsaw Toxicity dataset 3 .",
"We focus on these two datasets as they have been investigated in several previous studies (Pruksachatkun et al., 2021) and have been reported to carry significant presence of bias.",
"BIAS IN BIOS data (De-Arteaga et al., 2019) is purposed to train occupation classifier which predicts occupation given the biography of an individual.",
"For this data, the task classifier is an occupation classification model which is composed of a standard LSTM-based encoder combined with the output layer of 28 nodes, i.e, number of occupation classes.",
"JIGSAWTOXICITY dataset is commonly used to train toxic classifier which is tasked to predict if an input sentence is toxic or not.",
"This dataset has input sentences as the comments from Wikipedia's talk page edits labeled with the degree of toxicity.",
"In this dataset, the task classifier is a binary classifier trained to predict whether a comment is toxic or not.",
"We labeled the samples with >0.5 toxicity score as toxic and others as non-toxic to train the task classifier.",
"The task classifier trained with Jigsaw Toxicity dataset achieved an AUC of 0.957.",
"Table 4 in appendix summarizes the train/test/valid split for the 2 datasets.",
"The vector w sums up the absolute partial derivatives of $f_k(\mathbf{x})$ with respect to a given feature $x_i$, for $k = 1, \ldots, K$.",
"In our setup, we consider input features to be the word embeddings and the matrix J is computed over the same.",
"Given a D-dimensional word embedding, K classes, and N words in $\mathbf{x}$, $\mathbf{J}$ will be a matrix of size $K \times (D \cdot N)$.",
"In all our experiments, we choose w to be a uniform vector with entries $1/K$.",
"Such a choice assigns equal weight to the partial derivatives computed over each class.",
"One may choose to put a higher weight on derivatives computed over a specific class if there is a reason to believe that the accumulated prediction sensitivity should be informed more with respect to that class.",
"For instance, for a classifier that stratifies medical images into various diseases (Agrawal et al., 2019), disparity in model performance with respect to malicious diseases can be considered more costly.",
"Therefore, derivatives for classes that represent more malicious disease can be weighted higher.",
"Through the vector v , we aim to select words in x that carry gendered information.",
"We use two formulations for the the vector v as discussed below.",
"In this setup, we use the set of gendered words from Bolukbasi et al. (2016) and assign the entries in v corresponding to those words the value $1/(N_g \cdot D)$, where $N_g$ is the count of gendered words in the data-point.",
"While prior work has used word matching against a pre-defined corpus of tokens describing various demographic cohorts (Bolukbasi et al., 2016), these corpora do not contain words that are stereotypically associated with a particular cohort but not explicitly tied to it.",
"For example, the word volleyball is associated with females in the analysis presented by (Dinan et al., 2020).",
"To capture this nuance, we propose using another classifier (that acts on the same dataset as used to train the original classifier, for which we aim to compute P ) and using it to identify tokens containing information about the protected attribute (e.g. gender).",
"We discuss the model training below.",
"Protected Status Model: To extend accumulated prediction sensitivity to settings with no explicit protected attribute, we train a protected status model g .",
"Given the data-point x , goal of the PSM model g ( x ) is to predict the protected attributes.",
"Given a trained $g(\mathbf{x})$, we then compute another matrix $\mathbf{J}_g$, where the $(m, i)$th entry is $\left|\frac{\partial g_m(\mathbf{x})}{\partial x_i}\right|$ ($g_m$ is the probability output corresponding to the $m$th protected attribute class, e.g. male in a gender classifier).",
"We then define an entry $v_i \in \mathbf{v}$ as $\sum_m \mathbf{J}_g(m, i)$ (the vector v is normalized to be stochastic).",
"Intuitively, the sum $\sum_m \mathbf{J}_g(m, i)$ captures the model output's sensitivity with respect to the input feature $x_i$ and is expected to be higher if $x_i$ carries more gendered information.",
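A sketch of this PSM-based construction of v, again using finite differences in place of backpropagated gradients; g stands for the trained protected status model returning class probabilities, and the names are illustrative.

```python
import numpy as np

def psm_weights(g, x, eps=1e-5):
    """Build v from a protected status model g: entry v_i is proportional to
    the summed absolute sensitivity of g's class probabilities to feature x_i,
    then normalised so that v is stochastic."""
    M, D = len(g(x)), len(x)
    Jg = np.zeros((M, D))
    for i in range(D):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[i] += eps
        x_minus[i] -= eps
        Jg[:, i] = np.abs((g(x_plus) - g(x_minus)) / (2 * eps))
    v = Jg.sum(axis=0)      # sum over protected attribute classes
    return v / v.sum()
```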
"In our experiments, we train separate PSM models for gender sensitivity computation on Bias-in-bios and Jigsaw data-sets, as each data-point in these data-sets is additionally labeled with a binary gender class (male/female) 4 .",
"Gender PSMs predicts the associated gender given the datapoint x .",
"Training PSM on the same datasets used to train the task classifier f helps capture the gender stereotypes present in the respective datasets.",
"For instance, in a given dataset, if the word volleyball appears more often in data-points that correspond to the female gender, the gender classifier's sensitivity to this word is expected to be high, as the classifier may place higher emphasis on this word for gender classification.",
"We use the same model architecture as the task classifiers for PSM.",
"PSM for gender classification achieve an accuracy of 98.79% (Male Acc:98.84% Female Acc:98.17%) and 95.39% (Male Acc:95.92% Female Acc:96.22%) for Bias in bios and Jigsaw Toxicity datasets, respectively.",
"These accuracies are computed over the same train/test split as the task classifier.",
"4 We note that this is a limitation of this work as gender can be non-binary.",
"In addition to using the list of gendered words and PSM, we also test with a setting where we multiply the word embedding vectors to the proposed formulations of v .",
"We stack the word embedding vectors for each word x i x to obtain a vector of embeddings e i .",
"We perform an element-wise multiplication of the embedding vectors e i with the vector with entries 1 / ( N g D ) for gendered words or (cid:80) j J g ( j, i ) obtained using PSM.",
"This choice is motivated based upon the findings in (Han et al., 2020).",
"They leverage the magnitude of embedding vectors in determining saliency of the input words for the classification task at hand.",
"Their proposed methodology computes saliency maps over the features x i x by multiplying embedding vectors with partial derivatives of the class probabilities with respect to embedding vectors themselves.",
"We experiment with six fairness metrics.",
"Out of the six, one metric is a baseline based on counterfactual fairness and the rest are variants of the accumulated prediction sensitivity P .",
"Counter-factual Fairness (CF) : We use the counter-factual fairness definition mentioned in Garg et al. (2019) and compute the metric as the difference in model predictions between the original sample f ( x ) and its corresponding counter-factual gendered sample f ( x ) .",
"We take the L1 norm of the vector f ( x ) f ( x ) .",
"For example, we take the difference in predictions between the sample \"She practices dentistry\" and \"He practices den-tistry\", which is the corresponding counter-factual sample.",
"We use the definitional gender token substitutions from Bolukbasi et al. (2016) to create counter-factual samples.",
"uniform values 1 K and 1 DN , respectively.",
"This is a weak baseline as the choice of v does not provide any information regarding the gender-ness of the input words.",
"P2: Weighted Prediction Sensitivity based on PSM : In this setting, w is chosen to be a uniform vector, while v is chosen based on the PSM model.",
"P3: Weighted Prediction sensitivity + Embedding weights : In this setting, v is chosen based on the PSM model (akin to the metric in P2) which is further multiplied element-wise with the word embedding vectors.",
"P4: Hard gender weights based Prediction sensitivity : In this metric, we use the list of gendered words described in section 6.4.1 to determine v .",
"The value of entries in v is set to 1 DN g .",
"P5: Hard gender weights based prediction sensitivity + Embeddings : This setting is same as above, except entries in v are further multiplied element-wise with the word embedding vectors.",
"To evaluate whether the proposed prediction sensitivity correlates with human perception of fairness, we collect annotations from crowd workers using the Amazon Mechanical Turk platform.",
"Crowd workers are asked to annotate if a model prediction appears to be a biased prediction or not.",
"For Bias in Bios dataset, each sample presented to the annotators has the biography and occupation predicted by the model.",
"We collect annotations on a random sample of the test set.",
"For each biography and a predicted occupation, we ask annotators to label if the prediction is indicative of bias or if it is unbiased.",
"Bias refers to a situation where an occupation is incorrectly predicted based on the gender associated with the biography.",
"For instance, if the input biography is she studied at Harvard Medical School and practices dentistry. and is 5835 Example from the Bias in Bios dataset TC PSM Example from the Jigsaw Toxicity dataset TC PSM Table 2: Heat map for the vectors w TJ (top entry in each row) and v (bottom entry in each row) per input word x i .",
"EXAMPLES OF UNBIASED SAMPLES (The predicted profession is unrelated to gender stereotype about professions) BIO : She received a master's degree in computer science from the university of North Carolina at Chapel Hill.",
"Predicted Profession : Computer Scientist BIO : He received a master's degree in computer science from the university of North Carolina at Chapel Hill.",
"Predicted Profession : Computer Scientist EXAMPLES OF BIASED SAMPLES (Strongly biased predictions are based on associating a specific gender to a specific profession even when there are evidences against it in the biography) BIO : Mary has 25 years of experience in data analytics, business intelligence and information governance with fortune 100 companies.",
"BIO : He achieved a masters degree in nursing from the university of north Carolina at chapel hill Predicted Profession : Computer Scientist",
"predicted as nurse, then we call this prediction biased since the biography fits better for a doctor.",
"In case of unbiased predictions, the prediction is not expected to be influenced by the gender content in the biography.",
"Table 3 presents a sample of examples provided to the annotators for the Bias in bios dataset.",
"Each page in the annotation task consisted of ten biography-profession pairs.",
"We collect annotations for each biography-profession pair from at least three annotators and pick the label with majority vote.",
"Similarly for Jigsaw Toxicity dataset, each sample presented to the annotators contains the text and associated toxicity predicted by the model.",
"We restrict the set of annotators to be master annotators and the location of annotators to be Unites States.",
"Based on the initial pilot studies conducted in the Amazon Mechanical Turk platform, we setup a payment rate to ensure a fair compensation of at least 15$ /hour for all annotators that work at an average pace.",
"We annotated 900 test data-points from each dataset.",
"We note that these test data-points were misclassified by the classifiers f trained for each dataset.",
"While such a sampling may not conform to the true distribution of biased/unbiased model outcomes on the overall test set, we expect to get more biased samples amongst the misclassified samples.",
"The distribution between biased and unbiased outputs was about 55:45 for Bias in Bios and 50:50 for Jigsaw Toxicity .",
"For the Bias in Bios and Jigsaw Toxicity datsets, we obtained a Fliess' kappa of 0.43 and 0.47, respectively, amongst the three annotators.",
"This is considered a moderate level of agreement, which we believe is expected for an relatively ambiguous task to identify model outcomes influenced by gender.",
"We compute mutual information and bi-serial correlations as the primary measures of association between the human annotations and the accumulated model sensitivity .",
"Table 1 lists the bi-serial correlations and mutual information between manual annotations and the different fairness metrics.",
"First, we observe that correlations of the baseline with human judgement are mediocre (0.326 and 0.214) compared to the human judgement.",
"We attribute this to the fact that the metric attempts to quantify a fairly subjective assessment of bias that may have different interpretation (as also pointed out by the moderate level of annotation agreement across annotators).",
"However, the proposed variants of P have stronger correlations compared to the counter-factual baseline (except the method P1).",
"As expected, we see the smallest correlation for P1, since this metric does not account for gender-ness in v .",
"However, metrics that determine v based on PSM prediction sensitivity and gendered words get higher corre-5836 lations over P1 and the CF baseline.",
"Variant of P with v informed using the embedding vectors further lead to improved correlations.",
"We also observe weaker statistical significance in the case of Jigsaw Toxicity due to a weaker PSM.",
"We attribute this to the noise present in gender annotations for Jigsaw Toxicity dataset.",
"Hence, the performance of PSM in predicting the protected status is crucial for accurately measuring fairness.",
"In order to further analyse the effect of PSM, we look into heat-maps capturing w TJ and v separately.",
"As a reminder, the first quantity captures the weighted average of partial derivatives of class probabilites with respect to the input features, while the second quantity computes the weights assigned to sum up the aforementioned averages.",
"Table 2 shows while v mostly captures gendered words such as she, her and woman, it also captures words such as social, architecture and cheated to carry more gendered information compared to other words.",
"While these words conventionally are not gendered, for the datasets at hand, they seem to provide information whether the input data-point belongs to male/female gender.",
"We also note that w TJ weighs on occupation specific tokens such as \"physician\", \"executive\", etc.",
"This finding supports our motivations to compute v based on PSM and capturing feature attributions assigned to tokens that are implicitly related to a specific gender (instead of the definitional gender tokens only).",
"Hence, by incorporating PSM in computing P , we can capture bias present in nontrivial gendered tokens.",
"While the results showcase the promise of our metric, we draw the attention of the reader to the following considerations: (1) We observed that the metric quality depends on choice of the hyper-parameters w and v .",
"In this regard, our metric is not different from other metrics that also depend on a hyper-parameter choice.",
"For example, any classifier based metric has a threshold parameter and counterfactual fairness metrics rely on hyper-parameters such as the selected gendered words.",
"(2) Our metric only works for models for which gradients can be computed.",
"Most modern deep learning based models carry this property.",
"(3) Lastly, we note that it is hard to interpret the absolute value of the proposed metric.",
"The metric value should be used for relative comparison of two models which share input feature space and label space.",
"In addition, we note two considerations for relying on a PSM classifier.",
"First, training it requires access to gender labels.",
"Second, the PSM model itself could be biased.",
"Given that gender labels may not always be available for the dataset used to train model at hand, we study the impact of transferring a PSM model trained on a different dataset on computing our metric.",
"We also evaluate the effect of bias in PSM model on the overall metric value and present results in the Appendix D. We make observations such as the quality of the metric degrades as PSM becomes more biased.",
"Based on these observations, we recommend that if modeler is not able to obtain high performance PSM models, they fall back to using sources such as gendered words for computing the vector v .",
"Evaluating fairness is a challenging task as it requires selecting a notion of fairness (e.g. group or individual fairness) and then identifying metrics that can capture these notions of fairness while evaluating a classifier.",
"Additionally, certain notions of fairness may not be well defined and can change based upon social norms (e.g. volleyball being closely associated with females); that may seep into the dataset at hand.",
"In this work, we define an accumulated prediction sensitivity metric that relies on the partial derivatives of model's class probabilities with respect to input features.",
"We establish properties of this metric with respect to the three verticals of fairness metrics: group, individual and human-perception based.",
"We provide bounds on the metric's value when a predictor is expected to carry statistical parity or is trained with individual fairness.",
"We also evaluate this metric with fairness as perceived through human evaluation of model outputs.",
"We test variants of the proposed metric against an existing baseline derived from counter-factual fairness and observe better mutual information and correlation.",
"Specifically, a variant of the metric that relies on a Protected Status Model (that identifies tokens that carry gender information but may not conventionally be considered gendered) yields the best correlation with the human evaluation.",
"dividual fairness (Mehrabi et al., 2019).",
"We also aim to test the metric on other datasets with other protected attributes (e.g. race, nationality).",
"Finally, we can compare the metric across these datasets to compare trends across protected groups.",
"This work can be used to evaluate bias in models, and thus used to evaluate models serving human consumers.",
"As with all metrics, the metric does not capture all notions of bias, and thus should not be the only consideration for serving models.",
"While this is a valid risk, this is one that is not specific to prediction sensitivity.",
"Good use of this metric requires users to be cognizant of these strengths and weaknesses.",
"We also note that the metric requires defining protected attributes (e.g. gender) and our work carries the limitation that the selected datasets contain binary gender annotations.",
"Defining protected attributes may not always be possible and when possible, the protected attribute classes may not be comprehensive."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"method",
"method",
"method",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"objective",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"Supervised learning has traditionally focused on inductive learning by observing labeled examples of a task.",
"In contrast, humans have the ability to learn new concepts from language.",
"Here, we explore learning zero-shot classifiers for structured data 1 purely from language from natural language explanations as supervision.",
"For this, we introduce CLUES , a benchmark for C lassifier L earning U sing natural language E xplanation S , consisting of a range of classification tasks over structured data along with natural language supervision in the form of explanations.",
"CLUES consists of 36 real-world and 144 synthetic classification tasks.",
"It contains crowdsourced explanations describing real-world tasks from multiple teachers and programmatically generated explanations for the synthetic tasks.",
"We also introduce ExEnt , an entailment-based method for training classifiers from language explanations, which explicitly models the influence of individual explanations in making a prediction.",
"ExEnt generalizes up to 18% better (relative) on novel tasks than a baseline that does not use explanations.",
"We identify key challenges in learning from explanations, addressing which can lead to progress on CLUES in the future.",
"Our code and datasets are available at: https: //clues-benchmark.github.io .",
"Humans have a remarkable ability to learn concepts through language (Chopra et al., 2019; Tomasello, 1999).",
"For example, we can learn about poisonous mushrooms through an explanation like a mushroom is poisonous if it has pungent odor' .",
"Such Equal contribution 1 By structured data, we refer to data that can be reasonably represented using tables.",
"This is a highly flexible format for representing a lot of real-world data (e.g., spreadsheets, traditional classification datasets in CSV format, single-table databases, as well as structured text-rich data such as emails), with a large variety in possible table schemas.",
"an approach profoundly contrasts with the predominant paradigm of machine learning, where algorithms extract patterns by looking at scores of labeled examples of poisonous and edible mushrooms.",
"However, it is unnatural to presume the availability of labeled examples for the heavy tail of naturally occurring concepts in the world.",
"This work studies how models trained to learn from natural language explanations can generalize to novel tasks without access to labeled examples.",
"While prior works in this area (Srivastava et al., 2017, 2018; Hancock et al., 2018; Murty et al., 2020; Andreas et al., 2018; Wang* et al., 2020; Ye et al., 2020; Zhou et al., 2020) have explored explanations as a source of supervision, they evaluate models on a small number of tasks (2-3 relation extraction tasks in (Hancock et al., 2018; Wang* et al., 2020; Murty et al., 2020; Zhou et al., 2020), 7 email categorization tasks (Srivastava et al., 2017)).",
"Owing to the paucity of large-scale benchmarks for 6523 learning from explanations over diverse tasks, we develop CLUES , a benchmark of classification tasks paired with natural language explanations.",
"Over the last few decades, researchers and engineers alike have put immense effort into constructing structured and semi-structured knowledge bases (e.g., structured tables on Wikipedia, e-commerce sites, etc.).",
"Developing models that can reason over structured data is imperative to improve the accessibility of machine learning models, enabling even non-experts to interact with such data.",
"Hence, in this work, we specifically formulate our classification tasks over structured data.",
"Our benchmark is divided into CLUES-Real and CLUES-Synthetic consisting of tasks from real-world (UCI, Kaggle, and Wikipedia) and synthetic domains respectively.",
"Explanations for CLUES-Real are crowdsourced to mimic the diversity and difficulty of human learning and pedagogy.",
"For CLUES-Synthetic , we generate the explanations programmatically to explicitly test models' reasoning ability under a range of structural and linguistic modifications of explanations.",
"We train models with a mix of explanations and labeled examples, in a multi-task setup, over a set of seen classification tasks to induce generalization to novel tasks, where we do not have any labeled examples.",
"Ye et al. (2021) refer to this problem setup as cross-task generalization\".",
"Some recent methods on cross-task generalization from language use instructions/prompts (Mishra et al., 2022; Sanh et al., 2022; Wei et al., 2021) describing information about what is the task?' to query large language models.",
"In contrast, language explanations in CLUES provide the logic for performing the classification task, or intuitively how to solve the task?' .",
"For the running example of mushroom classification, an instruction/prompt might be can you classify a mushroom with pungent odor as poisonous or edible?' .",
"On the other hand, an example of an explanation in CLUES is a mushroom is poisonous if it has pungent odor' .",
"We find that simply concatenating explanations to the input does not help pre-trained models, like RoBERTa (Liu et al., 2019), generalize to new tasks.",
"Thus, we develop ExEnt , an entailment-based model for learning classifiers guided by explanations, which explicitly models the influence of individual explanations in deciding the label of an example.",
"ExEnt shows a relative improvement of up to 18% over other baselines on unseen tasks.",
"To identify the challenges of learning from explanations, we perform extensive analysis over synthetic tasks.",
"Our analysis explores how the structure of an explanation (simple clauses vs. nested clauses) and the presence of different linguistic components in explanation (conjunctions, disjunctions, and quantifiers) affect the generalization ability of models.",
"The rest of the paper is structured as follows: we describe our crowdsourced-benchmark creation pipeline in 3.",
"In 4, we analyze our collected data.",
"In 5, we describe our models, experiments, and results.",
"We conclude with a brief discussion on the contributions and our findings, followed by a statement of ethics and broader impact.",
"Our contributions are: We introduce CLUES , a benchmark for learning classifiers over structured data from language.",
"We develop ExEnt , an entailment-based model for learning classifiers guided by explanations.",
"ExEnt shows a relative improvement of up to 18% over other baselines on generalization to novel tasks.",
"We explore the effect on the generalization ability of models learning from language by ablating the linguistic components and structure of explanations over our benchmark's synthetic tasks.",
"Learning concepts from auxiliary information: Prior work has explored techniques to incorporate side-information' to guide models during training (Mann and McCallum, 2010; Ganchev et al., 2010).",
"More recently, researchers have explored using language in limited data settings for learning tasks such as text classification (Srivastava et al., 2017, 2018; Hancock et al., 2018) and question answering (Wang* et al., 2020; Ye et al., 2020).",
"However, we diverge from these works by exploring the generalization ability of classifiers learned by using language over novel tasks as opposed to gauging performance only on seen tasks.",
"Explanation-based Datasets: The role of explanations and how they can influence model behavior is a widely studied topic in machine learning (Wiegreffe and Marasovic, 2021).",
"Among language-based explanation studies, past work has primarily developed datasets that justify individual predictions made by a model (also called, local explanations) (Rajani et al., 2019; Camburu et al., 2018), inter alia .",
"In contrast, our work focuses 6524 on explanations that define concepts and capture a broad range of examples rather than individual examples.",
"Our notion of explanations is shared with Andreas et al. (2018); Srivastava et al. (2017, 2018).",
"We differ from these works as (1) our benchmark comprises a large set of classification tasks spanning diverse concepts for learning from explanations as opposed to working on a limited set of tasks in prior work and (2) our benchmark is domain agnostic in the source of classification tasks considered as long as we can represent the inputs of the task in a tabular (structured) format.",
"Few-shot & Zero-shot learning: Large pretrained language models (LMs) (Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020) have been shown to perform impressively well in few-shot settings (Brown et al., 2020; Lester et al., 2021).",
"Reformulating natural language tasks with patterns has been shown to boost few-shot learning ability for small language models as well (Schick and Schtze, 2021; Tam et al., 2021).",
"More recently, a few works have focused on evaluating the generalization of models to unseen tasks by using prompts and performing multi-task training (Mishra et al., 2022; Ye et al., 2021; Sanh et al., 2022; Min et al., 2021; Chen et al., 2022; Aghajanyan et al., 2021) While the training and evaluation setup is similar, our work is significantly different from these works as (1) the explanations in our work provide rationales for making a classification decision as opposed to explaining a task using prompts, (2) we explore classification over structured data as opposed to free-form text by designing a model that can leverage explanations.",
"In this section, we describe our benchmark creation process in detail.",
"In CLUES , we frame classification tasks over structured data represented in tabular format.",
"Based on the source of tables used to construct the classification tasks, we consider two splits of our benchmark, CLUES-Real (real-world datasets) and CLUES-Synthetic (synthetic datasets).",
"We first gather/create classification tasks from UCI, Kaggle, and Wikipedia tables, then collect explanations for each classification task.",
"Classification tasks from UCI and Kaggle.",
"UCI ML repository 2 and Kaggle 3 host numerous datasets for machine learning tasks.",
"For our benchmark, we pick out the tabular classification datasets.",
"Then, we manually filter the available datasets to avoid ones with",
"(a) many missing attributes and",
"(b) complex attribute names that require extensive domain knowledge making them unsuitable for learning purely from language.",
"CLUES-Real contains 18 classification tasks from UCI and 7 from Kaggle (the details of tasks are in Appendix B).",
"Mining tables from Wikipedia.",
"Wikipedia is a rich, free source of information readily accessible on the web.",
"Further, a lot of this information is stored in a structured format as tables.",
"We explore creating additional classification tasks based on tables from Wikipedia, where each row in a table is assigned a category label.",
"However, only a small fraction of the tables might be suitable to frame a classification task for our benchmark.",
"Thus, we need to identify suitable tables by mining a large collection of tables from Wikipedia (we use Wikipedia dump available on April 2021).",
"We formalize this mining-and-pruning process as a crowdsourcing task (on Amazon Mechanical Turk), where we present each turker with a batch of 200 tables and ask them to pick out suitable tables from that batch.",
"For a table considered suitable by a turker, we further ask the turker to mention which column of the table should be considered as providing the classification labels.",
"We identified 11 classification tasks corresponding to 9 Wikipedia tables after mining around 10K Wikipedia tables (the details of tasks are provided in Appendix B).",
"Our explanation collection process consists of two stages (1) teachers providing explanations after reviewing multiple labeled examples of the task, and (2) students verifying explanations and classifying new examples based on explanations for the tasks.",
"Collecting explanations : We use the Amazon Mechanical Turk (AMT) platform to collect explanations for CLUES-Real .",
"In each HIT, we provide turkers with a few labeled examples of a dummy task (each corresponding to a row in a table) and a set of good and bad explanations for the task to 2 https://archive.ics.uci.edu/ml/ 3 https://www.kaggle.com/datasets 6525 teach them about the expected nature of explanations.",
"Next, we test them on a qualification quiz' to gauge their understanding of good explanations.",
"Upon qualification, the turker advances to the explanation collection phase of the HIT.",
"At this stage, the turker is provided with 15-16 labeled examples of a task in CLUES-Real and we ask them to write explanations describing the logic behind the classification for each class.",
"Turkers are required to submit a minimum of two explanations ( 5 tokens each) for each task.",
"Further, teachers can test their understanding by taking a validation quiz, where they make predictions over new unlabeled examples from the task.",
"Based on their informed classification accuracy, teachers can optionally refine their explanations.",
"Finally, when turkers are content with their performance, they freeze' the explanations and advance to the test-quiz where they are evaluated on a new set of unlabeled examples from the task (dif-ferent from validation quiz).",
"4 We will refer to turkers who have provided responses at this stage as teachers' since they provide explanations to teach' models about different classification tasks.",
"Verification of explanations : After the explanation collection, we validate the utility of the sets of explanations for a task from each teacher by evaluating if they are useful they are for other humans in learning the task.",
"For this, a second set of turkers 5 is provided access to the collected explanations from a teacher for a task, but no labeled examples.",
"These turkers are then asked to predict the labels of test examples from the held-out test set, solely based on the provided explanations.",
"Additionally, we ask turkers in the verification stage to give a Likert rating (1-4 scale) on the usefulness of each explanation.",
"Since the turkers in the verification stage perform the classification task using language explanations from a teacher, we refer to them as students' for our setup.",
"Thus, the tasks in CLUES-Real contain explanations from multiple teachers and multiple students corresponding to a teacher.",
"This provides rich information about variance in teacher and student performance indicating how amenable different tasks are for learning via language.",
"We provide insights into the performance of teachers and students of our setup in 4.",
"The complexity and fuzziness of real-world concepts and the inherent linguistic complexity of crowdsourced explanations can often shroud the aspects of the task that make it challenging for models to learn from explanations.",
"To evaluate models in controlled settings where such aspects are not conflated, we create CLUES-Synthetic , a set of programmatically created classification tasks with varying complexity of explanations (in terms of structure and presence of quantifiers, conjunctions, etc.) and concept definitions.",
"We create tasks in CLUES-Synthetic by first selecting a table schema from a pre-defined set of schemas, then generating individual examples of the task by randomly choosing values (within a pre-defined range, obtained from schema) for each column of the table.",
"Next, we assign labels to each example by using a set of rules' for each task.",
"In this context, a rule' is a conditional statement (analogous to conditional explanations that we see for real-world tasks) used for labeling the examples.",
"We use the following types of rules that differ in structure and complexity ( c i denotes i th clause and l denotes a label): Simple: IF c 1 THEN l Conjunctive: IF c 1 AND c 2 THEN l Disjunctive: IF c 1 OR c 2 THEN l Nested disjunction over conjunction: IF c 1 OR ( c 2 AND c 3 ) THEN l Nested conjunction over disjunction: IF c 1 AND ( c 2 OR c 3 ) THEN l For each of the above, we include variants with negations (in clauses and/or labels): Some examples IF c 1 THEN NOT l , IF c 1 OR NOT c 2 THEN l We also consider other linguistic variations of rules by inserting quantifiers (such as always', likely').",
"The synthetic explanations are template-generated based on the structure of the rules used in creating 6526 Vocabulary 1026 Avg.",
"the task.",
"For brevity, we defer additional details on the use of quantifiers, label assignment using rules, and creation of synthetic explanations to Appendix A. Overall we have 48 different task types (based on the number of classes and rule variants) using which we synthetically create 144 classification tasks (each containing 1000 labeled examples).",
"Task Statistics : Table 1 shows the statistics of tasks in CLUES .",
"The real-world tasks in our benchmark are from a wide range of domains, such as data corresponding to a simple game (e.g. tic-tac-toe), medical datasets (e.g. identifying liver pa-tients), merit-classification of teachers and students, network-related datasets (eg. internet-firewall), among others.",
"The synthetic tasks are created using table schemas denoting different domains, such as species of animals, species of birds, etc. (details in Appendix A).",
"As seen in Table 1, 5.4 explanation sets were collected for each classification task from human teachers on average.",
"Further, each explanation set was verified by 3 students during the verification task.",
"An aggregate of 133 teachers provide 318 explanations for tasks in CLUES-Real .",
"All collected explanations were manually filtered and irrelevant explanations were removed.",
"Lexical analysis of explanations : Table 2a shows the statistics for explanation texts in our dataset.",
"6 We evaluate the average length of the explanation texts, vocabulary size and number of unique bigrams present in the explanations.",
"Explanation characteristics : Following Chopra et al. (2019), we categorize the explanations based on the different aspects of language (generics, quantifiers, conditional, and negation) present in these explanations.",
"Table 3 shows the statistics of various categories in our dataset.",
"Note that an explanation might belong to more than one category (for example, an example like if the number of hands equal 6 Statistics in Table 2a was obtained using the spacy tok-enizer. CATEGORYEXAMPLEREALSYN Generic Being over 50 increases the risk of a stroke. 48 % 50 % Quantifier ... usually means you won't have heart disease. 52 % 50 % Conditional If color code ... , then ... 15 % 100 % Negations ... is not low. 16 % 50% Table 3: Count of explanations in our dataset based on various aspects of language present in them to 2, then it is usually foo\" , will be categorized both as having both conditional and quantifiers.) We found that around 52% of the explanations for the real-world tasks had quantifiers (such as some', majority', most', etc.) in them.",
"A full list of quantifiers present in the data is given in Appendix A. Reading complexity : We analyze the reading complexity of crowdsourced explanations by using Flesch reading ease 7 .",
"Reading complexity values for our crowdsourced explanations vary from 3.12 (professional grade reading level) to 106.67 (easier than 3rd-grade reading level), with a median value of 65.73 (8th/9th-grade reading level).",
"Usefulness of the explanations : During the validation stage, we ask the turkers to provide a rating (on a Likert scale from 1 to 4) on the utility of the explanations for classification.",
"The semantics of ratings are, 1 not helpful', 2 seems useful', 3 helped in predicting for 1 sample', and 4 mostly helpful in prediction'.",
"The average rating for the explanations in CLUES-Real is 2.78, denoting most explanations were useful, even if they did not directly help predict labels in some cases.",
"In Figure",
"2(a), we also provide a histogram of the Likert ratings provided by the students.",
"Characteristics of teachers and students : Figure",
"2(b) shows the normalized teacher performance vs normalized student performance for teacher-student pairs in CLUES-Real .",
"Normalized performance of an individual teacher (or, student) on a task is defined as the difference between the performances of the teacher (or, student) and an average teacher (or, student) for the same task.",
"The positive correlation ( = 0.17) suggests that students tend 7 https://en.wikipedia.org/wiki/Flesch_ Kincaid_readability_tests 6527 1 2 3 4 Rating 0 100 200 300 F r e q u e n c y Likert Ratings of Explanations",
"to perform well if taught by well-performing teachers.",
"Positive correlation ( = 0.48) in Figure",
"2(c), indicates that task difficulty (captured by classification accuracy) is well-correlated for a teacher and student on average.",
"On visualizing the difference between an average student and an average teacher performance for each task in CLUES-Real , we find that an average teacher performs better than the average student on most tasks.",
"However, for the tic-tac-toe' task in CLUES-Real , we find that the student accuracy was around 13% higher than average teacher performance.",
"We hypothesize that this task can be solved by commonsense reasoning without relying on the provided explanations, resulting in students performing better than teachers.",
"We quantify the average performance of teachers and students on CLUES-Real in Table 4.",
"8 We find that students per-8 Note that teacher scores in the tables and figures do not include 9 Wikipedia Tasks for which the authors formed the form lower than teachers on average as expected since a teacher has more expertise in the task.",
"Moreover, it is challenging to teach a task perfectly using explanations in a non-interactive setting where a student cannot seek clarifications.",
"Additional data analysis and details of HIT compensation can be found in Appendix C and D. 5 Experiment Setup and Models In this section, we describe our training and evaluation setup, our models, and experimental findings.",
"Our goal is to learn a model that, at inference, can perform classification over an input x to obtain the class label y , given the set of explanations E for the classification task.",
"Figure 4 shows our setup, where we train our model using multi-task training over a set of tasks T seen and evaluate generalization to a new task, t T novel .",
"The task split we use for our experiments can be found in Appendix E.1.",
"We select our best model for zero-shot evaluation based on the validation scores on the seen tasks.",
"Since we do not make use of any data from the novel tasks to select our best model, we maintain the true zero-shot setting (Perez et al., 2021).",
"We encode each structured data example, x , as a text sequence, by linearizing it as a sequence of attribute-name and attribute-value pairs, separated by [SEP] tokens.",
"To explain, the leftmost attribute-name and attribute-value pair of structured input example in Figure 1 is represented as odor | pungent' .",
"The linearization allows us to make use of pre-trained language models for the classification task.",
"Our linearization technique explanations.",
"is similar to the one used in Yin et al. (2020) with the exception that we do not use the column type.",
"We will refer to the linearized format of structured inputs by Features-as-Text' or FaT'.",
"For our baselines, we make use of a pre-trained RoBERTa model (Liu et al., 2019).",
"However, RoBERTa with the standard-fine-tuning approach cannot allow a generalization test as the number of output classes varies for each task.",
"Furthermore, we cannot train individual class heads at inference since we test zero-shot .",
"Hence, we make the following modifications to make RoBERTa amenable for zero-shot generalization tests: a pre-trained RoBERTa model takes the linearized structured data (FaT) as input and outputs a representation for this context (in the [CLS] token).",
"Next, we run another forward pass using RoBERTa to obtain a representation of the labels based on their text (e.g., poisonous' or edible' for our example in Figure 1).",
"Finally, we compute the probability distribution over labels by doing a dot-product of the representations of the input and the labels.",
"We train this model using cross-entropy loss.",
"In our experiments, we refer to this model as RoBERTa w/o Exp since the model does not use any explanations.",
"We also experiment with a RoBERTa w/ Exp. model where a RoBERTa model takes as input a concatenated sequence of all the explanations for the task along with FaT.",
"The rest of the training setup remains the same as RoBERTa w/o Exp.",
"We find that a simple concatenation of explanations is not helpful for zero-shot generalization to novel tasks (results in Figure 6).",
"Next, we describe ExEnt which explicitly models the role of each explanation in predicting the label for an example.",
"To model the influence of an explanation towards deciding a class label, we draw analogies with the entailment of an explanation towards the structured input.",
"Here, given a structured input ( premise ) and an explanation ( hypothesis ), we need to decide whether the explanation strengthens the belief about a specific label ( entailment ), weakens belief about a specific label ( contradiction ) or provides no information about a label ( neutral ).",
"Figure 5 shows the overview of our explanation-guided classification model, ExEnt ; given a structured input and explanation of a task, let l exp denote the label mentioned in the explanation, and L denote the set of labels of the task.",
"The entailment model assigns logits p e , p c and p n to the hypothesis being entailed, contradicted or neutral respectively w.r.t. the premise.",
"Based on the label assignment referred to by an explanation, we assign logits to class labels as follows: If explanation mentions to assign a label : Assign p e to l exp , p c is divided equally among labels in L \\ { l exp } , and p n is divided equally among labels in L .",
"If explanation mentions to not assign a label : This occurs if a negation is associated with l exp .",
"Assign p c to l exp , p e is divided equally among labels in L \\ { l exp } , and p n is divided equally among labels in L .",
"We obtain logit scores over labels of the task corresponding to each explanation as described above.",
"We compute the final label logits by aggregating (using mean) over the label logits corresponding to each explanation of the task.",
"The final label logits are converted to a probability distribution over labels, and we train ExEnt using cross-entropy loss.",
"In experiments, we consider a pre-trained RoBERTa model fine-tuned on MNLI (Williams et al., 2017) corpus as our base entailment model.",
"9 Further, in order to perform the assignment of logits using an explanation, we maintain meta-information for each explanation to (1) determine if the explanation mentions to assign' a label or not assign' a label, and (2) track l exp (label mentioned in explanation).",
"For CLUES-Synthetic , we parse the templated explanations to obtain the 9 Weights link: https://huggingface.co/ textattack/roberta-base-MNLI 6529 MLM Exp.",
"meta-information, while for the explanations in CLUES-Real , the authors manually annotate this meta-information.",
"Additional training details and hyperparameters are provided in Appendix E. 5.4 Zero-Shot Generalization Performance We evaluate ExEnt and the baselines on zero-shot generalization to novel tasks in our benchmark as described in 5.1.",
"We train separate models for CLUES-Real and CLUES-Synthetic .",
"Figure 6 shows the generalization performance of all models.",
"On CLUES , we find that ExEnt outperforms the baselines suggesting that performing entailment as an intermediate step helps aggregate information from multiple explanations better.",
"On CLUES-Real , ExEnt gets an 18% relative improvement over the baselines while having an 11% relative improvement on CLUES-Synthetic To evaluate the utility of our synthetic tasks in enabling transfer learning to real-world tasks, we finetune a ExEnt model pre-trained on synthetic tasks.",
"We experiment with three pre-training task sets -CLUES-Synthetic , CLUES-Synthetic (3x) and CLUES-Synthetic (5x) consisting of 144, 432, and 720 tasks.",
"These larger synthetic task sets are created by sampling tasks from each of the 48 different synthetic tasks types similar to how CLUES-Synthetic was created (see 3.2 for refer-ence).",
"We find that pre-training on synthetic tasks boosts the performance of ExEnt on the novel tasks of CLUES-Real by up to 39% (relative) over the RoBERTa w/o Exp. model.",
"Human Performance To situate the performance of the automated models, we performed human evaluation for tasks in test split of CLUES-Real using AMT.",
"For this, we sampled at most 50 examples 10 from the test split of tasks in CLUES-Real and each example was labeled' by 2 turkers using the explanations of the best teacher' (the teacher whose students got the best performance during explanation verification' stage; see 3.1.2 for reference).",
"The average human accuracy for this was about 70%.",
"However, the performance numbers of humans and models are not directly comparable as the model looks at all the explanations for the task, whereas the humans observe a small number of explanations.",
"Humans also see multiple examples of the task during the evaluation, which they can use to fine-tune their understanding of a concept.",
"The automated models don't have a mechanism to leverage such data.",
"To identify key challenges in learning from explanations, we perform experiments ablating the linguistic components and structure of explanations.",
"For a robust analysis, we generate more tasks for each task type in CLUES-Synthetic , making 100 tasks for each of the 48 different task-types in CLUES-Synthetic (axes of variation include 4 negation types, 3 conjunction/disjunction types, 2 10 Many tasks (such as tasks created from Wikipedia tables) have less than 50 examples in their test",
"split.) 6530 noneg.",
"Appendix A.5).",
"We evaluate the generalization performance of ExEnt to novel tasks on each of the different types separately by training separate models for each task type.",
"Figure 7 shows the relative gain in generalization performance of models learned using explanations compared to the performance of baseline RoBERTa w/o Exp. 11 Our results indicate that learning from explanations containing quantifiers is highly challenging.",
"In the presence of quantifiers, models guided by explanations perform on par with the baseline RoBERTa w/o Exp model.",
"Negations also pose a challenge, as indicated by the decline in relative gains of models guided by explanation compared to the RoBERTa w/o Exp model.",
"Structurally complex explanations (containing conjunc-tions/disjunctions of clauses) are also hard to learn from compared to simple conditional statements.",
"These challenges provide a fertile ground for future research and improvements.",
"We have introduced CLUES , a benchmark with diverse classification tasks over structured data along with natural language explanations to learn them.",
"CLUES is agnostic in the domain of tasks allowing the research community to contribute more tasks in the future.",
"We also present ExEnt , an entailment-based model to learn classifiers guided by explanations.",
"Our results are promising and indicate that explicitly modeling the role of each explanation through entailment can enable learning classifiers for new tasks from explanations alone.",
"Future work can explore the open challenges in learning from explanations, such as modeling the influence of quantifiers and negations present in an explanation.",
"Our empirical analyses here aggregates explana-11 Accuracies have been averaged over the multi-class and binary datasets since the trends remain the same across both.",
"tions for a task from multiple teachers.",
"Future work can explore learning from explanations from individual teachers, as well as cross-teacher variance.",
"Alternatively, rather than treat explanations from different teachers homogeneously, future work can model trustworthiness of a crowd of teachers from their provided explanations.",
"All tables in CLUES-Real were collected from free public resources (with required attributions) and tables in CLUES-Synthetic were created by us programmatically.",
"We do not collect any personal information from the turkers who participated in our crowdsourced tasks.",
"The dataset has been released without mentioning any personal details of turkers available automatically in AMT (such as turker IDs).",
"The turkers were compensated fairly and the payment per task is equivalent to an hourly compensation that is greater than minimum wage (based on the median time taken by turkers).",
"We provide details of the reward structure for the crowdsourcing tasks in Appendix D. For the Wikipedia mining task in this work, we limited the locale of eligible turkers to US, UK, New Zealand and Australia.",
"For other crowdsourcing tasks, we limited the locale of eligible turkers to US.",
"Further, to ensure good-faith turkers, we required that the approval rate of the turkers be above 98%.",
"Our screening process has selection biases that likely over-samples turkers from demographics that are over-represented on AMT (ethnically white, college-educated, lower-to-medium income and young) (Hitlin, 2016), and this is likely to affect the type of language usage in the collected explanations.",
"The broader impact of this research in the longer term could make developing predictive technologies more accessible to ordinary users, rather than data-scientists and experts alone."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"method",
"method",
"result",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"objective",
"other",
"other",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"Algorithmic approaches to interpreting machine learning models have proliferated in re-cent years.",
"We carry out human subject tests that are the first of their kind to isolate the effect of algorithmic explanations on a key aspect of model interpretability, simulatability , while avoiding important confounding experimental factors.",
"A model is simulatable when a person can predict its behavior on new inputs.",
"Through two kinds of simulation tests involving text and tabular data, we evaluate five explanations methods: (1) LIME, (2) Anchor, (3) Decision Boundary, (4) a Prototype model, and (5) a Composite approach that combines explanations from each method.",
"Clear evidence of method effectiveness is found in very few cases: LIME improves simulatability in tabular classification, and our Prototype method is effective in counterfactual simulation tests.",
"We also collect subjective ratings of explanations, but we do not find that ratings are predictive of how helpful explanations are.",
"Our results provide the first reliable and comprehensive estimates of how explanations influence simulatability across a variety of explanation methods and data domains.",
"We show that (1) we need to be careful about the metrics we use to evaluate explanation methods, and (2) there is significant room for improvement in current methods.",
"1 1 Introduction Interpretable machine learning is now a widely discussed topic (Rudin, 2019; Doshi-Velez and Kim, 2017; Lipton, 2016; Gilpin et al., 2018).",
"While survey papers have not converged on definitions of explainable or interpretable, there are some common threads in the discourse.",
"Commentators observe that interpretability is useful for 1 We make all our supporting code, data, and models publicly available at: https://github.com/peterbhase/ InterpretableNLP-ACL2020 achieving other model desiderata, which may include building user trust, identifying the influence of certain variables, understanding how a model will behave on given inputs, and ensuring that models are fair and unbiased.",
"In their review, Doshi-Velez and Kim (2017) outline an approach to measuring interpretability.",
"They describe two human-subject tasks that test for a particularly useful property: simulatability .",
"A model is simulatable when a person can predict its behavior on new inputs.",
"This property is especially useful since it indicates that a person understands why a model produces the outputs it does.",
"The first of the two tasks is termed forward simulation : given an input and an explanation, users must predict what a model would output for the given input.",
"The second is counterfactual simulation : users are given an input, a model's output for that input, and an explanation of that output, and then they must predict what the model will output when given a perturbation of the original input.",
"The explanation itself is algorithmically generated by a method for interpreting or explaining a model.",
"Simulation tests have been carried out before, but no study to date has isolated the effect of explanations on simulatability (Ribeiro et al., 2018; Chandrasekaran et al., 2018; Nguyen, 2018; Bang et al., 2019).",
"We carry out simulation tests that are the first to incorporate all of the following design choices: (1) separating explained instances from test instances, so explanations do not give away the answers, (2) evaluating the effect of explanations against a baseline of unexplained examples, (3) balancing data by model correctness, so users cannot succeed by guessing the true label, and (4) forcing user predictions on all inputs, so performance is not biased toward overly specific explanations.",
"We display our study design in Figure",
"1. We provide results from high-quality human (Post) Prediction Phase Learning Phase (w/ explanations) Learning Phase (Pre) Prediction Phase SimulationForward SimulationCounterfactual (Pre) Prediction Phase : Human simulation : Model prediction : Explanation : Counterfactual input : Counterfactual model prediction (Post) Prediction Phase Explanation Effect Post Sim.",
"user tests (with over 2100 responses) that include both forward and counterfactual simulation tasks.",
"Through these tests, we measure explanation effectiveness for five methods across text and tabular classification tasks.",
"Our evaluation includes two existing explanation techniques, LIME and Anchor (Ribeiro et al., 2016, 2018), and we translate two other explanation methods from image recognition models to work with our textual and tabular setups.",
"The first of these is a latent space traversal method, which we term the Decision Boundary approach (Joshi et al., 2018; Samangouei et al., 2018), and the second is a case-based reasoning method, which we term the Prototype method (Chen et al., 2019).",
"The final method is a novel Composite approach that combines complementary explanations from each method.",
"Lastly, we also collect subjective, numerical user ratings of explanation quality.",
"Our key findings are:",
"1. LIME improves forward and counterfactual simulatability in our tabular classification task.",
"2. Prototype improves counterfactual simulatability across textual and tabular data domains.",
"3. No method definitively improves forward and counterfactual simulatability together on the text task, though our Prototype and Composite methods perform the best on average.",
"4. It appears that users' quality ratings of explanations are not predictive of how helpful the explanations are with counterfactual simulation.",
"5. While users rate Composite explanations as among the best in quality, these combined explanations do not overtly improve simulatability in either data domain.",
"Survey papers use key terms in varying ways.",
"Rudin (2019) draws a distinction between interpretability and explainability, suggesting that a model is interpretable if it performs computations that are directly understandable.",
"Post-hoc explanations, on the other hand, are potentially misleading approximations of the true computations.",
"Gilpin et al. (2018) also distinguish between the two concepts, though they define them differently.",
"In this paper, we do not distinguish between interpretability and explainability.",
"Rather, we adopt the conceptual framework of Doshi-Velez and Kim (2017), who consider interpretability in terms of downstream desiderata one can assess models with respect to.",
"Our terminology is as follows: we will say that explanation methods may improve the interpretability of a model, in the sense that an interpretable model is simulatable .",
"Several taxonomies have been proposed for categorizing methods for interpretability.",
"We organize methods below into the categories of: feature importance estimation, case-based reasoning, and latent space traversal.",
"Feature Importance Estimation.",
"Feature importance estimates provide information about how the model uses certain features.",
"Most prominent among these methods are the gradient-based approaches first introduced for vision by Simonyan et al. (2014), which Li et al. (2016) show may be translated for use with text data.",
"These approaches have since been demonstrated to sometimes behave in counterintuitive ways (Adebayo et al., 2018; Kim et al., 2018).",
"A number of alternative methods have been proposed for quantifying feature importance across data domains (Kim et al., 2018; Lundberg and Lee, 2017; Sundarara-jan et al., 2017).",
"In our study, we choose to evaluate two domain-agnostic approaches, LIME and Anchor (Ribeiro et al., 2016, 2018).",
"These methods use simple models, i.e. sparse linear models and rule lists, to approximate complex model behavior locally around inputs.",
"They show the estimated effects of directly interpretable features on the model's output.",
"For these methods, what is local to an input is defined in a domain-specific manner via a perturbation distribution centered on that input.",
"Case-based Reasoning.",
"Prototype models classify new instances based on their similarity to other known cases.",
"Two works on prototype models for computer vision introduced neural models that learn prototypes corresponding to parts of images (Chen et al., 2019; Hase et al., 2019).",
"These prototypes are used to produce classifier features that are intended to be directly interpretable.",
"Latent Space Traversal.",
"These methods traverse the latent space of a model in order to show how the model behaves as its input changes.",
"In a classification setting, crossing the decision boundary may reveal necessary conditions for a model's prediction for the original input.",
"Several methods exist for vision models (Joshi et al., 2018; Samangouei et al., 2018).",
"To our knowledge no such approach exists for discriminative models of text and tabular data, so we develop a simple method for these kinds of models (described in Section 3.4).",
"Here we discuss works involving automatic and human evaluations of interpretability, as well as how we improve on past simulation test design.",
"While human evaluations are useful for evaluating many aspects of interpretability, we restrict our discussion to works measuring simulatability.",
"Improving Forward Test Design.",
"Forward simulation tasks have been implemented in many different forms, and there is a serious need for consensus on proper procedure here.",
"Doshi-Velez and Kim (2017) originally propose that users predict model behavior, given an input and an explanation.",
"With many explanation methods, this is a trivial task because the explanations directly reveal the output .",
"For example, LIME gives a predicted probability that indicates the model behavior with high likelihood.",
"We make a number of experimental design choices that give us more reliable estimates of method effectiveness than past studies.",
"(1) We separate the explained instances from the test instances, to prevent explanations from giving away the answers.",
"In three studies, the same data points were used as both explanation and prediction items (Nguyen, 2018; Chandrasekaran et al., 2018; Bang et al., 2019).",
"(2) We evaluate the effect of explanations against a baseline where users see the same example data points without explanations.",
"No prior evaluation includes this control.",
"(3) Two choices further distinguish our test from that of Ribeiro et al. (2018).",
"We balance data by model correctness, so users cannot succeed simply by guessing the true label, and we force user predictions on every input, so our metrics do not favor overly niche explanations.",
"Counterfactual Simulatability.",
"Counterfactual simulatability has, to our knowledge, never been measured for machine learning models.",
"While Doshi-Velez and Kim (2017) propose asking users to edit inputs in order to change the model outputs, we instead ask users to predict model behavior on edited versions of data points, as this approach is more scalable than soliciting creative responses.",
"Relation to Automatic Tests.",
"Prior works have proposed automatic metrics for feature importance estimates (Nguyen, 2018; Hooker et al., 2019; DeYoung et al., 2020).",
"Typically these operate by checking that model behavior follows reasonable patterns on counterfactual inputs constructed using the explanation, e.g., by masking impor-tant features and checking that a class score drops.",
"Whereas automatic metrics define appropriate model behavior in advance for counterfactual instances generated by a fixed schema, we present a random counterfactual to a human and elicit their prediction of model behavior for that instance.",
"This allows for human validation of model behavior in a broader range of input scenarios than an automatic procedure, where human expectations are given in response to diverse and concrete examples rather than dictated in advance.",
"Subjective Ratings.",
"Hutton et al. (2012) measure user judgments of whether word importance measures explain model behavior in a text classi-LIME 0 1 +.05+.04-.06-.11-.18 .24 -.02 -.26 charmsmodest dismissedoccasionaldespite Sum of WordsBaseline Est.",
"fication setting.",
"Our rating task is thus similar to theirs; our changes are that we evaluate with a Likert scale rather than forced ranking, using explanation techniques for neural models rather than word importance estimates from a naive Bayes classifier.",
"In another study, users judged image classification explanations on a Likert scale ranging from no explanation to concise explanation (Bang et al., 2019).",
"Whereas this scale focuses on conciseness, we ask users to rate how explanations reveal reasons for model behavior.",
"In this section, we describe the explanation methods.",
"Example explanations for a test movie review are shown in Figure",
"2. We limit our discussion of LIME and Anchor, since details for these methods can be found in the original papers.",
"Note that LIME, Anchor, and our Decision Boundary method can be used with arbitrary blackbox models.",
"The Prototype method is itself a neural model that also produces an explanation.",
"Ribeiro et al. (2016) present LIME as a local linear approximation of model behavior.",
"With a user-specified feature space, a linear model is fit to the blackbox outputs on samples from a distribution around an input.",
"We set the number of features to use to 5, and we take class probabilities as our model output.",
"When showing LIME explanations to users, we give them the selected features with estimated weights, the model intercept, the sum of model weights, and the predicted model output.",
"Ribeiro et al. (2018) introduce a method for learning rule lists that predict model behavior with high confidence.",
"With samples from a distribution around an input, they use a PAC learning approach to obtain a rule list.",
"When the rules apply to an input, there is a high probability it will receive the same prediction as the original.",
"The feature space of the rule list is specified by the user.",
"As in the original work, we use individual tokens for our text data, and we use the same learning parameters for each Anchor explanation.",
"Prototype models have previously been used for interpretable computer vision (Chen et al., 2019; Hase et al., 2019).",
"We develop a prototype model for use with text and tabular classification tasks.",
"In our model, a neural network g maps inputs to a latent space, and the score of class c is: f ( x i ) c = max p k P c a ( g ( x i ) , p k ) where a is a similarity function for vectors in the latent space, and P c is the set of protoype vectors for class c .",
"We choose the Gaussian kernel for our similarity function: a ( z i , p k ) = e || z i p k || 2 .",
"The model predicts inputs to belong to the same class as the prototype they're closest to in the latent space.",
"Unlike in Chen et al. (2019), we take the max activation to obtain concise explanations.",
"In lieu of image heatmaps, we provide feature importance scores.",
"What distinguishes these scores from those of standard feature importance estimates is that the scores are prototype-specific, rather than class-specific.",
"We choose a feature omission approach for estimation.",
"With text data, omission is straightforward: for a given token, we take the difference in function output between the original input and the input with that token's embedding zeroed out.",
"In the tabular domain, however, variables can never take on meaningless values.",
"To circumvent this problem, we take the difference between the function value at the original input and the expected function value with a particular feature missing.",
"The expectation is computed with a distribution over possible values for a missing feature, which is provided by a multinomial logistic regression conditioned on the remaining covariates.",
"When presenting prototype explanations, we provide users with the predicted class score, most similar prototype, and top six feature importance scores, provided that score magnitudes meet a small threshold.",
"In the explanation in Figure 2, no scores meet this threshold.",
"We set the size of P c to 40 for our text classification task and 20 for our tabular classification task.",
"For further training and feature importance details, see the Appendix.",
"Joshi et al. (2018) and Samangouei et al. (2018) introduce techniques for traversing the latent spaces of generative image models.",
"Their methods provide paths that start at input data points and cross a classifier's decision boundary.",
"Such methods may help users see the necessary conditions for the model prediction.",
"We provide a simple method for traversing the latent space of a discriminative classifier (see example in Figure 2).",
"Our algorithm first samples around the original input to get instances that cross the decision boundary.",
"A counterfactual input is chosen from these by taking the instance with the fewest edited features (tokens or variables), while breaking ties using the Euclidean distance between latent representations.",
"Lastly, we provide a path between inputs by greedily picking the edit from the remaining edits that least changes the model's evidence margin, which is the difference between positive and negative class scores.",
"The explanations we present to users include the input, steps to the counterfactual input, and evidence margin at each step.",
"When the path is longer than four steps, we show only the last four.",
"We hypothesize that the above explanations provide complementary information, since they take distinct approaches to explaining model behavior.",
"Hence, we test a Composite method that combines LIME and Anchor with our decision boundary and prototype explanations.",
"We make two adjustments to methods as we combine them.",
"First, we show only the last step of each decision boundary explanation, i.e., the set of changes that flips the prediction.",
"Second, we train our prototype model with its feature extraction layers initialized from the neural task model and thereafter fixed.",
"We do so since we are interested in explaining the task model behavior, and this tactic yields prototypes that reflect characteristics of the task model.",
"In this section, we describe our datasets, task models, user pool, and experimental design.",
"We perform experiments for classification tasks with text and tabular data.",
"The first dataset consists of movie review excerpts (Pang et al., 2002).",
"The dataset includes 10,662 reviews with binary sentiment labels, which we split into partitions of 70%, 10%, and 20% for the train, validation, and test sets, respectively.",
"We use the same neural architecture as in Yang et al. (2016), limited to use with single sentences.",
"The second dataset is the tabular Adult data from the UCI ML repository (Dua and Graff, 2017).",
"This dataset contains records of 15,682 individuals, and the label is whether their annual income is more than $50,000.",
"We use the same data processing scheme and neural network architecture as Ribeiro et al. (2018).",
"Model accuracies are given in the Appendix.",
"We gathered over 2100 responses via in-person tests with 32 trained undergraduates who had taken at least one course in computer science or statistics.",
"2 Each user was randomly assigned to one of the ten conditions corresponding to our dataset-method pairs.",
"Once each condition had at least 3 full tests collected, we allocated remaining participants to the Composite method.",
"In order to ensure high quality data, we employed a screening test to check for user understanding of their explanation method and test procedure.",
"Two participants were screened out due to low scores.",
"We also excluded data from a user whose task completion time was extremely low.",
"We paid all users $15 USD per hour.",
"Ten users were tested again with a new dataset and explanation method, giving us a total of 39 user tests.",
"Some users had to exit the experiment before finishing all of the tasks; 2 We require this advanced background because explanations rely on conditional probabilities, approximations of probabilities, and other quantitative concepts.",
"for data analysis purposes, we consider only task items answered in both Pre and Post test phases.",
"Forward Simulation.",
"This test is represented in Figure",
"1. The test is split into four phases: a learning phase, a Pre prediction phase, a learning phase with explanations , and a Post prediction phase.",
"To begin, users are given 16 examples from the validation set with labels and model predictions but no explanations.",
"Then they must predict the model output for either 16 or 32 new inputs, with the number chosen based on user time constraints.",
"Users are not allowed to reference the learning data while in prediction phases.",
"Next, they return to the same learning examples, now with explanations included.",
"Finally, they predict model behavior again on the same instances from the first prediction round.",
"By design, any improvement in user performance in the Post prediction phase is attributable only to the addition of explanations.",
"We show a screenshot of the user testing interface in the Appendix.",
"Counterfactual Simulation.",
"Represented in Figure 1, this test requires users to predict how a model will behave on a perturbation of a given data point.",
"The test consists of Pre and Post prediction rounds, where the only difference between them is the addition of explanations.",
"In both rounds, we provide users with the same 32 inputs from the test dataset (or 16 due to time constraints), their ground truth labels, the model's prediction, and a perturbation of the input.",
"See the Appendix for a description of the perturbation generation algorithm.",
"Users then predict model behavior on the perturbations.",
"In the Post round, users are given the same data, but they are also equipped with explanations of the model predictions for the original inputs.",
"Therefore, any improvement in performance is attributable to the addition of explanations.",
"Data Balancing.",
"One critical aspect of our experimental design is our data balancing.",
"We aim to prevent users from succeeding on our tests simply by guessing the true label for every instance.",
"To do so, we ensure that true positives, false positives, true negatives, and false negatives are equally represented in the inputs.",
"Likewise, for the counterfactual test, we sample perturbations such that for any instance, there is a 50% chance that the pertur-Text Ratings Tabular Ratings Method n CI n CI LIME 144 4 .",
"bation receives the same prediction as the original input.",
"We confirm user understanding of the data balancing in our screening test.",
"Data Matching.",
"Within each data domain, all users receive the same data points throughout the experiment.",
"This design controls for any differences in the data across conditions and users, though this does reduce the information added by each test, making our confidence intervals relatively wide given the same sample size.",
"We also match data across prediction rounds in order to control for the influence of particular data points on user accuracy between the Pre and Post phases.",
"Users see explanations in two phases of the tests: the second learning phase in the forward test, and the Post phase of the counterfactual test.",
"In these stages, we ask users to give subjective judgments of the explanations.",
"They rate each method on a 7 point Likert scale, in response to the question, Does this explanation show me why the system thought what it did?",
"We explain that users should give higher ratings when the explanation shows the reasons for a model prediction, regardless of whether or not the prediction is correct.",
"We report data from a total of 2166 responses from 39 user tests.",
"Each test is for a method and data domain pair, and contains either 16 or 32 task items, with some missingness due to users exiting the study early.",
"In the results to follow, we use the term Change to refer to our estimate of explanation effectiveness: the difference in user accuracy across prediction phases in simulation tests.",
"We perform two-sided hypothesis tests for this quantity by a block bootstrap, resampling both users and unique task items within each condition (Efron and Tibshirani, 1994).",
"In addition, since users complete the first prediction round in either simulation test without access to explanations, we estimate the mean Pre accuracy for each method with a random effects model.",
"This allows us to share information across methods to yield more precise estimates of test performance.",
"Below, we analyze our experimental results and answer three questions: 1) Do explanations help users?",
"2) How do users rate explanations?",
"3) Can users predict explanation effectiveness?",
"We show simulation test results in Tables 1 and",
"2. In Table 1, we group results by data domain, and in Table 2, we group results by test type.",
"Our principal findings are as follows:",
"1. LIME with tabular data is the only setting where there is definitive improvement in forward and counterfactual simulatability.",
"With no other method and data domain do we find a definitive improvement across tests.",
"2. Even with combined explanations in the Composite method, we do not observe definitive effects on model simulatability.",
"3. Interestingly, our prototype method does reliably well on counterfactual simulation tests in both data domains, though not forward tests.",
"It may be that the explanations are helpful only when shown side by side with inputs.",
"These results suggest that: (1) many explanation methods may not noticeably help users understand how models will behave, (2) methods that are successful in one domain might not work equally well in another, (3) combining information from explanations does not result in overt improvements in simulatability.",
"Yet, given our wide confidence intervals, these results should be considered cautiously.",
"It may also be that other methods do in fact improve simulatability, but we have not precisely estimated this.",
"For example, our Prototype and Composite methods do the best on average with text data, though we cannot be confident that they improve simulatability.",
"Note that estimates of explanation effectiveness could be influenced by users simply regressing to the mean accuracy between prediction rounds.",
"We find that our primary results are not skewed by this phenomenon: the highest estimates of Change in each data domain and test type come from conditions where mean Pre test performance was either above the overall mean or, in one case, within 1.15 percentage points.",
"This potential problem is further mitigated by our random effects model of Pre test performance, which pulls low Pre test means toward the overall mean.",
"It seems that, as intended, users rated explanations based on quality rather than model correctness, as we observe no significant difference in ratings grouped by model correctness (table in Appendix).",
"In Table 3, we show user ratings for each method and data domain.",
"We observe that: 1) ratings are generally higher for tabular data, relative to text data, 2) the Composite and LIME methods receive the highest ratings in both domains, and 3) variance in explanation ratings is quite high, relative to their scale.",
"We answer this question by measuring how explanation ratings relate to user correctness in the Post phase of the counterfactual simulation test.",
"In this phase, users rate explanations of model predictions for an original input and predict model behavior for a perturbation of that input.",
"If ratings of explanation quality are a good indicator of their effectiveness, we would expect to see that higher ratings are associated with user correctness.",
"We do not find evidence that explanation ratings are predictive of user correctness.",
"We estimate the relationship via logistic regression with user correctness and ratings.",
"We test models with both absolute ratings and ratings normalized within users, since ratings lack an absolute scale between users.",
"With 640 text data points, we estimate with 95% confidence that moving from a rating of 4 to 5 is associated with between a 2 .",
"9 and 5 .",
"2 percentage point change in expected user correctness.",
"Using normalized ratings, we find that moving from the mean explanation rating to the first standard deviation is associated with between a 3 .",
"9 and 12 .",
"2 percentage point change.",
"With 515 tabular data points, we estimate that a change in rating from 4 to 5 is associated with between a 2 .",
"6 and 5 .",
"3 percentage point change in expected user correctness.",
"Of course, we have not shown that there is no association.",
"Yet it's important to note that if there is no relationship between user ratings and simulatability, then simply querying humans about explanation quality will not provide a good indication of true explanation effectiveness.",
"When do explanations succeed at improving user accuracy, and when do they fail at doing so?",
"Below, we present example counterfactual test items, and we analyze how the explanations may have pointed to the reasons for model behavior.",
"For the example below, 5 of 6 Post test responses for Prototype and LIME were correct that the model output did not change for the counterfactual, up from 3 of 6 in the Pre test.",
"LIME identifies funny and moment as positive words, with weights adding to 1 .",
"04 after including the baseline.",
"The notable negative word is sucks ( w = . 23 ), which changes to a similar word (bothers).",
"All together, LIME suggests the prediction would stay the same since the positive words are unaffected and the only important negative word has a similar substitute.",
"The Prototype model gives the most activated prototype: Murders by Numbers isn't a great movie, but it's a perfectly acceptable widget.",
"It identifies but and funny as important words for the prototype's activation.",
"The counterfactual is still similar to the prototype in key ways, suggesting the prediction would not change.",
"For the item below, only 7 of 13 responses were correct after seeing explanations, with no method improving correctness relative to the Pre test accuracy.",
"Users needed to predict that the model prediction changed to negative for the counterfactual.",
"Original ( y = pos ): A bittersweet film, simple in form but rich with human events.",
"Counterfactual ( y c = neg ): A teary film, simple in form but vibrant with devoid events.",
"Anchor gives one word as a condition for the original positive prediction: bittersweet.",
"But what happens when bittersweet changes to teary?",
"The Anchor explanation does not actually apply to this counterfactual scenario, as its probabilistic description of model behavior is conditioned on the word bittersweet being present.",
"LIME gives five words, each with small weights ( | w | < . 04 ), while the baseline is .",
"91 .",
"This suggests that LIME has failed to identify features of the input that are necessary to the model output.",
"Among these five words are the three that changed between sentences, but we would not suspect from their weights that the changes made in the counterfactual would flip the model output.",
"Decision Boundary gives a counterfactual input with a negative prediction: A sappy film, simple in link but unique with human events.",
"However, it is difficult to tell whether this counterfactual sentence is similar in decision-relevant ways to the proposed counterfactual sentence.",
"The Prototype model gives the activated prototype for the original prediction: Watstein handily directs and edits around his screenplay's sappier elements...and sustains Off the Hook 's buildup with remarkable assuredness for a first-timer.",
"No important words are selected.",
"We are left without a clear sense of why this was the most similar prototype and what circumstances would lead to the model output changing.",
"These examples reveal areas for improvement in explanations.",
"Better methods will need to distinguish between sufficient and necessary factors in model behavior and clearly point to the ways in which examples share decision-relevant characteristics with new inputs.",
"Further, they must do so in the appropriate feature space for the problem at hand, especially for models of complex data.",
"Forward Tests Stretch User Memory.",
"We show users 16 examples during learning phases but do not allow them to reference the learning data during prediction phases.",
"Reasonably, some users reported that it was difficult to retain insights from the learning phase during later prediction rounds.",
"Generating Counterfactual Inputs.",
"It may be difficult to algorithmically construct counterfactual inputs that match the true data distribution, especially when seeking to change the model prediction.",
"Our text counterfactuals are regularly out of the data distribution, in the sense that no real movie review would exhibit the word choice they do.",
"We still consider these inputs to be of interest, for the reason that a model will handle such inputs in some manner, and we aim to assess all possible model behaviors in our analysis.",
"Fair Comparison of Explanation Methods.",
"In our forward simulation treatment phases, we provide users with 16 explained instances and allow them to read at their own pace.",
"We control for the number of data points between methods, but one could instead control for user exposure time or computation time of explanation generation.",
"Further, for LIME and Anchor, there are approaches for efficiently covering the space of inputs with a limited budget of examples (Ribeiro et al., 2018).",
"We opt not to use them since 1) they are not applicable to the Decision Boundary and Prototype methods, which lack a similar notion of coverage, and 2) it is not clear whether these approaches are useful for text data.",
"It may be that when using such approaches, LIME and Anchor perform better on forward simulation tasks.",
"Simulatability metrics give a quantitative measure of interpretability, capturing the intuition that explanations should improve a person's understanding of why a model produces its outputs.",
"In this paper, we evaluated five explanation methods through simulation tests with text and tabular data.",
"These are the first experiments to fully isolate the effect of algorithmic explanations on simulatability.",
"We find clear improvements in simulatability only with LIME for tabular data and our Prototype method in counterfactual tests.",
"It also appears that subjective user ratings of explanation quality are not predictive of explanation effectiveness in simulation tests.",
"These results suggest that we must be careful about the metrics we use to evaluate explanation methods, and that there is significant room for improvement in current methods.",
"We thank the reviewers for their helpful feedback and our study users.",
"This work was supported by NSF-CAREER Award 1846185, DARPA MCS Grant N66001-19-2-4031, a Royster Society PhD Fellowship, and Google and AWS cloud compute awards.",
"The views contained in this article are those of the authors and not of the funding agency."
] | [
"abstain",
"objective",
"abstain",
"method",
"result",
"result",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"objective",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"other",
"method",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"result",
"other",
"other",
"other"
] |
[
"We present the Berkeley Crossword Solver, a state-of-the-art approach for automatically solving crossword puzzles.",
"Our system works by generating answer candidates for each crossword clue using neural question answering models and then combines loopy belief propagation with local search to find full puzzle solutions.",
"Compared to existing approaches, our system improves exact puzzle accuracy from 57% to 82% on crosswords from The New York Times and obtains 99.9% letter accuracy on themeless puzzles.",
"Our system also won first place at the top human crossword tournament, which marks the first time that a computer program has surpassed human performance at this event.",
"To facilitate research on question answering and crossword solving, we analyze our system's remaining errors and release a dataset of over six million question-answer pairs.",
"The key to solving crosswords is mental flexibility. If one answer doesn't seem to be working out, try something else.",
"Crossword puzzles are perhaps the world's most popular language game, with millions of solvers in the United States alone (Ginsberg, 2011).",
"Crosswords test knowledge of word meanings, trivia, commonsense, and wordplay, while also requiring one to simultaneously reason about multiple intersecting answers.",
"Consequently, crossword puzzles provide a testbed to study open problems in AI and NLP, ranging from question answering to search and constraint satisfaction.",
"In this paper, we describe an end-to-end system for solving crossword puzzles that tackles many of these challenges.",
"Crossword puzzles are word games consisting of rectangular grids of squares that are to be filled in with letters based on given clues (e.g., Figure 1).",
"Puzzles typically consist of 6080 clues that vary in difficulty due to the presence of complex wordplay, intentionally ambiguous clues, or esoteric knowledge.",
"Each grid cell belongs to two words, meaning that one must jointly reason about answers to multiple questions.",
"Most players complete crosswords that are published daily in newspapers and magazines such as The New York Times (NYT), while other more expert enthusiasts also compete in live events such as the American Crossword Puzzle Tournament (ACPT).",
"These events are intensely competitive: one previous winner reportedly solved twenty puzzles per day as practice (Grady, 2010), and top competitors can perfectly solve expert-level puzzles with over 100 clues in just 3 minutes.",
"Automated crossword solvers have been built in the past and can outperform most hobbyist humans.",
"Two of the best such systems are Proverb (Littman et al., 2002) and Dr. Fill (Ginsberg, 2011).",
"Despite their reasonable success, past systems struggle to solve the difficult linguistic phenomena present in crosswords, and they fail to outperform expert humans.",
"At the time of their respective publications, Proverb achieved 213th place out of 252 in the ACPT, while Dr. Fill achieved 43rd place.",
"Answering crossword clues involves challenges not found in traditional question answering (QA) benchmarks.",
"The clues are typically less literal; they span different reasoning types (c.f., Table 1); and they cover diverse linguistic phenomena such as polysemy, homophony, puns, and other types of wordplay.",
"Many crossword clues are also intentionally underspecified, and to solve them, one must be able to know what they don't know and defer answering those clues until crossing letters are known.",
"Crosswords are also useful from a practical perspective as the data is abundant, well-validated, diverse, and constantly evolving.",
"In particular, there are millions of question-answer pairs online, and unlike crowdsourced datasets that are often rife with artifacts (Gururangan et al., 2018; Min et al., 2019), crossword clues are written and validated by experts.",
"Finally, crossword data is diverse as it spans many years of pop culture, is written by thousands of different constructors, and contains various publisher-specific idiosyncrasies.",
"Solving crosswords goes beyond just generating answers to each clue.",
"Without guidance from a constraint solver, QA models cannot reconcile crossing letter and length constraints.",
"Satisfying these constraints is challenging because the search space is enormous and many valid solutions exist, only one of which is correct.",
"Moreover, due to miscalibra-tion in the QA model predictions, exact inference may also lead to solutions that are high-likelihood but completely incorrect, similar to other types of structured decoding problems in NLP (Stahlberg and Byrne, 2019; Kumar and Sarawagi, 2019).",
"Finally, the challenges in search are amplified by the unique long tail of crossword answers, e.g., daaa bears or eeny meeny miny moe , which makes it highly insufficient to restrict the search space to solutions that contain only common English words.",
"We present the Berkeley Crossword Solver (BCS), which is summarized in Figure 2.",
"The BCS is based on the principle that some clues are difficult to answer without any letter constraints, but other (easier) clues are more standalone.",
"This naturally motivates a multi-stage solving approach, where we first generate answers for each question independently, fill in the puzzle using those answers, and then rescore uncertain answers while conditioning on the predicted letter constraints.",
"We refer to these stages as first-pass QA, constraint resolution, and local search, and we describe each component in Sections 35 after describing our dataset 3074 Question Answering Local Search Loopy Belief Propagation 1 4 5 3 2 1 4 5 3 2 1 4 5 3 2 1 4 5 3 2 Acros Backward tu Pikachu traine Exclamation of surprise",
"in Section 2.",
"In Section 6, we show that the BCS substantially improves over the previous state-of-the-art Dr. Fill system, perfectly solving 82% of crosswords from The New York Times , compared to 57% for Dr. Fill.",
"Nevertheless, room for additional improvement remains, especially on the QA front.",
"To facilitate further exploration, we publicly release our code, models, and dataset: https:// github.com/albertkx/berkeley-crossword-solver .",
"This section describes the dataset that we built for training and evaluating crossword solving systems.",
"Recall that a crossword puzzle contains both question-answer pairs and an arrangement of those pairs into a grid (e.g., Figure 1).",
"Unfortunately, complete crossword puzzles are protected under copyright agreements; however, their individual question-answer pairs are free-to-use.",
"Our dataset efforts thus focused on collecting numerous question-answer pairs (Section 2.1) and we collected a smaller set of complete puzzle grids to use for final evaluation (Section 2.2).",
"We collected a dataset of over six million question-answer pairs from top online publishers such as The New York Times , The LA Times , and USA Today",
"We show qualitative examples in Table 1, summary statistics in Table 2, and additional breakdowns in Appendix B. Compared to existing QA datasets, our crossword dataset represents a unique and challenging testbed as it is large and carefully labeled, is varied in authorship, spans over 70 years of pop culture, and contains examples that are difficult for even expert humans.",
"We built validation and test sets by splitting off every question-answer pair used in the 2020 and 2021 NYT puzzles.",
"We use re-cent NYT puzzles for evaluation because the NYT is the most popular and well-validated crossword publisher, and because using newer puzzles helps to evaluate temporal distribution shift.",
"Word Segmentation of Answers Crossword answers are canonically filled in using all capital letters and without spaces or punctuation, e.g., whale that stinks becomes WHALETHATSTINKS .",
"These unsegmented answers may confuse neural QA models that are pretrained on natural English text that is tokenized into wordpieces.",
"To remedy this, we trained a word segmentation model that maps the clues to their natural language form.",
"1 We collected segmentation training data by retrieving common n -grams from Wikipedia and removing their spaces and punctuation.",
"We then finetuned GPT-2 small (Radford et al., 2019) to generate the segmented n -gram given its unsegmented version.",
"We ran the segmenter on all answers in our data.",
"In all our experiments, we train our QA models using segmented answers and we post-hoc remove spaces and punctuation from their predictions.",
"To evaluate our final crossword solver, we collected a validation and test set of complete 2020 and 2021 puzzle grids.",
"We use puzzles from The New York Times , The LA Times , Newsday , The New Yorker , and The Atlantic .",
"Using multiple publishers for 1 More simplistic algorithms that segment the answer into known English words are insufficient for many crossword answers, e.g., DAAABEARS and EENYMEENYMINYMOE .",
"evaluation provides a unique challenge as each publisher contains different idiosyncrasies, answer distributions, and crossword styles.",
"We use 2020 NYT as our validation set and hold out all other puzzles for testing.",
"There are 430 total test puzzles.",
"The initial step of the BCS is question answering: we generate a list of possible answer candidates and their associated probabilities for each clue.",
"A key requirement for this QA model is that it does not output unreasonable or overly confident answers for hard clues.",
"Instead, this model is designed to be used as a first-pass that generates reasonable candidates for every clue, in hope that harder clues can be reconciled later when predicted letter constraints are available.",
"We achieve this by restricting our first-pass QA model to only output answers that are present in the training set.",
"As discussed in Section 5, we later generate answers outside of this closed-book set with our second-pass QA model.",
"Model Architecture We build our QA model based on a bi-encoder architecture (Bromley et al., 1994; Karpukhin et al., 2020) due to its ability to score numerous answers efficiently and learn using few examples per answer.",
"We have two neural network encoders: EC ( ) , the clue encoder, and EA ( ) , the answer encoder.",
"Both encoders are initialized with BERT-base-uncased (Devlin et al., 2019) and output the encoder's [CLS] representation as the final encoding.",
"These two encoders are trained to map the questions and answers into the same feature space.",
"Given a clue c , the model scores all possible answers a i using a dot product similarity function between feature vectors: sim( c, a i ) = EC ( c ) TEA ( a i ) .",
"Our answer set consists of the 437.8K answers in the training data.",
"2 2 Our bi-encoder model is a closed-book QA model because it does not have open-book access to external knowl-Training We train the encoders in the same fashion as DPR (Karpukhin et al., 2020): batches consist of clues, answers, and distractor answers.",
"The two encoders are trained jointly to assign a high similarity to the correct question-answer pairs and low similarity to all other pairs formed between the clue and distractor answers.",
"We use one distractor answer per clue that we collect by searching each clue in the training set using TFIDF and returning the top incorrect answer.",
"We tune hyperparameters of our bi-encoder model based on its topk accuracy on the NYT validation set.",
"Inference At test time, for each clue c , we compute the embedding v c = EC ( c ) and retrieve the answers whose embeddings have the highest dot product similarity with v c .",
"We obtain probabilities for each answer by softmaxing the dot product scores.",
"To speed up inference, we precompute the answer embeddings and use FAISS (Johnson et al., 2019) for similarity scoring.",
"To evaluate our bi-encoder, we compute its topk recall on the question-answer pairs from the NYT test set.",
"We are most interested in top-1000 recall, as we found it to be highly-correlated with downstream solving performance (discussed in Section 7).",
"As a baseline, we compare against the QA portion of the previous state-of-the-art Dr. Fill crossword solver (Ginsberg, 2011).",
"This QA model works by ensembling TFIDF-like scoring and numerous additional modules (e.g., synonym matching, POS matching).",
"Our bi-encoder model considerably outperforms Dr. Fill, improving top-1000 recall from 81.2% to 94.6% (Figure 3).",
"Also note that approximately 4% of test answers are not seen during training, and thus the oracle recall for our first-pass QA model is 96%.",
"Given the list of answer candidates and their associated probabilities from the first-pass QA model, we next built a solver that produces a puzzle solution that satisfies the letter constraints.",
"Formally, crossword solving is a weighted constraint satisfaction problem, where the probability over solutions is given by the product of the confidence scores produced by the QA model (Ginsberg, 2011).",
"There edge sources such as Wikipedia (Roberts et al., 2020).",
"We found in preliminary experiments that open-book models struggle as most crossword answers are not present or are difficult to retrieve from knowledge sources such as Wikipedia.",
"are numerous algorithms for solving such problems, including branch-and-bound, integer linear programming, and more.",
"We use belief propagation (Pearl, 1988), henceforth BP, for two reasons.",
"First, BP directly searches for the puzzle with the highest expected overlap with the ground-truth puzzle, rather than the puzzle with the highest likelihood under the QA model (Littman et al., 2002).",
"This is advantageous as it maximizes the total number of correct words and letters in the solution, and it also avoids strange solutions that may have spuriously high scores under the QA model.",
"Second, BP also produces marginal distributions over words and characters, which is useful for generating an n -best list of puzzle candidates (used in Section 5).",
"Loopy Belief Propagation We use loopy BP, inspired by the Proverb crossword solver (Littman et al., 2002).",
"That is, we construct a bipartite graph with nodes for each of the crossword's clues and cells.",
"For each clue node, we connect it via an edge to each of its associated cell nodes (e.g., a 5-letter clue will have degree 5 in the constructed graph).",
"Each clue node maintains a belief state over answers for that clue, which is initialized using a mixture of the QA model's probabilities and a unigram letter LM.",
"3 Each cell node maintains a belief state over letters for that cell.",
"We then iteratively 3 The unigram letter LM accounts for the probability that an answer is not in our answer set.",
"apply BP with each iteration doing message passing for all clue nodes in parallel and then for all cell nodes in parallel.",
"The algorithm empirically converges after 510 iterations and completes in just 10 seconds on a single-threaded Python process.",
"Greedy Inference BP produces a marginal distribution over words for each clue.",
"To generate an actual puzzle solution, we run greedy search where we first fill in the answer with the highest marginal likelihood, remove any crossing answers that do not share the same letter, and repeat.",
"Many of the puzzle solutions generated by BP are close to correct but have small letter mistakes, e.g., NAUCI instead of FAUCI or TAZOAMBASSADORS instead of JAZZAMBASSADORS , as shown in Figure 4.",
"4 We remedy this in the final stage of the BCS with local search (LS), where we take a second-pass through the puzzle and score alternate proposals that are a small edit distance away from the BP solution.",
"In particular, we alternate between proposing new candidate solutions by flipping uncertain letters and scoring those proposals using a second-pass QA model.",
"Proposing Alternate Solutions Similar to related problems in structured prediction (Stahlberg and Byrne, 2019) or model-based optimization (Fu and Levine, 2021), the key challenge in searching for alternate puzzle solutions is to avoid false positives and adversarial inputs.",
"If we score every proposal within a small edit distance to the original, we are bound to find nonsensical character flips that nevertheless lead to higher model scores.",
"We avoid this by only scoring proposals that are within a 2-letter edit distance and also have nontrivial likelihoods according to BP or a dictionary.",
"Specifically, we score all proposals whose 12 modified letters each have probability 0.01 or greater under the character marginal probabilities produced by BP.",
"5 We also score all proposals whose 12 modified letters 4 These errors stem from multiple sources.",
"First, 4% of the answers in a test crossword are not present in our bi-encoder's answer set.",
"Those answers will be not be filled in correctly unless the solver can identify the correct answer for all of the crossing answers.",
"Second, natural QA errors exist even on questions with non-novel answers.",
"Finally, the BP algorithm may converge to a sub-optimal solution.",
"5 The character-level marginal distribution for most characters assigns all probability mass to a single letter after a few iterations of BP (e.g., probability 0.9999).",
"We empirically chose 0.01 as it achieved the highest validation accuracy.",
"cause the corresponding answer to segment into valid English words.",
"6 Scoring Solutions With Second-Pass QA Given the alternate puzzle solutions, we could feed each of them into our bi-encoder model for scoring.",
"However, we found that bi-encoders are not robust they sometimes produce high-confidence predictions for the nonsensical answers present in some candidate solutions.",
"We instead use generative QA models to score the proposed candidates as we found these models to be empirically more robust.",
"We finetuned the character-level model ByT5-small (Xue et al., 2022) on our training set to generate the answer from a given clue.",
"We then score each proposed candidate using the product of the model's likelihoods of the answers given the clues, (cid:81) j P ( a j | c j ) .",
"After scoring all candidate proposals, we apply the best-scoring edit and repeat the proposal and scoring process until no better edits exist.",
"Figure 4 shows an example of the candidates accepted by LS.",
"Quantitatively, we found that LS applied 243 edits that improved accuracy and 31 edits that hurt accuracy across 255 NYT test puzzles.",
"We evaluated our final system on our set of test puzzles and compare the results to the state-of-the-art Dr. Fill system (Ginsberg, 2011).",
"7 We compute 6 For instance, given a puzzle that contains a fill such as MUNNYANDCLYDE , we consider alternate solutions that contain answers such as BUNNYANDCLYDE and SUNNYANDCLYDE , as they segment to bunny and clyde and sunny and clyde . 7 Note that while the original Dr. Fill paper was published in 2011, the system has been consistently updated and has substantially improved.",
"Dr. Fill can outperform all but the best human solvers (see Table 5 for statistics on its improvement).",
"three accuracy metrics: perfect puzzle, word, and letter.",
"Perfect puzzle accuracy requires answering every clue in the puzzle correctly and serves as our primaryand most challengingmetric.",
"Table 3 shows our main results.",
"We outperform Dr. Fill on perfect puzzle accuracy across crosswords from every publication source.",
"For example, we obtain a 24.8% absolute improvement on perfect puzzle accuracy on crossword puzzles from The New York Times , which is a statistically significant improvement ( p < 0 . 01 ) according to a paired t -test.",
"We also observe comparable or better word and letter accuracies than Dr. Fill across all sources.",
"Our improvement on puzzles from The New Yorker is relatively small; this discrepancy is possibly due to the small amount of data from The New Yorker in our training set (see Figure 7).",
"Themed vs. Themeless Puzzles Although the BCS achieves equivalent or worse letter accuracy on Newsday and LA Times puzzles, it obtains substantially higher puzzle accuracy on these splits.",
"We attribute this behavior to errors concentrated in unique themed puzzles, e.g., ones that place multiple letters into a single cell.",
"To test this, we break down NYT puzzles into those with and without special theme entries (see Appendix D for our definition of theme puzzles).",
"On themeless NYT puzzles, we achieve 99.9% letter accuracy and 89.5% perfect puzzles, showing that themed puzzles are a major source of our errors.",
"Note that the Dr. Fill system includes various methods to detect and resolve themes and is thus more competitive on such puzzles, although it still underperforms our system.",
"our last evaluation, we competed live in the Amer-We",
"ican Crossword Puzzle Tournament (ACPT), the longest-running and most prestigious human crossword tournament.",
"We obtained special permission from the organizers to compete in the 2021 version of the tournament against 1,100 top human competitors.",
"For the live tournament, we used a version 1.0 of our system, which does not use belief propagation or local search but instead uses Dr. Fill's constraint-resolution system.",
"Our system won first placewe had a total score of 12,825 compared to the top human who had 12,810 (scor-ing details in Appendix C).",
"Figure 5 shows our scores compared to the top and median human competitor on the 7 puzzles used in the competition.",
"We also retrospectively evaluated our final BCS system (i.e., using our solver based on belief propagation and local search), and achieved a higher total score of 13,065.",
"This corresponds to getting 6 out of the 7 puzzles perfect and 1 letter wrong on 1 puzzle.",
"System Ablations We also investigated the importance of our QA model, BP inference, and local search with an ablation study.",
"Table 4 shows results for perfect puzzle accuracy on NYT 2021 puzzles under different settings.",
"The first ablation shows that our local search step is crucial for our solver to achieve high accuracy.",
"The second and third ablations show that the BCS's QA and solver are both superior to their counterparts from Dr. Fill swapping out either component hurts accuracy.",
"Our system outperforms the best human solvers; does this mean that crosswords are solved?",
"The answer is, of course, no.",
"In this section, we show that substantial headroom remains on QA accuracy and the handling of special themed puzzles.",
"der for our solver to find the correct solution.",
"We found that when our QA model ranks the true answer within the top 1,000 predictions, the answer is almost always filled in correctly (Figure 11).",
"Despite top-1000 accuracy typically being sufficient, our QA model still makes numerous errors.",
"We manually analyzed these mistakes by sampling 200 errors from the NYT 2021 puzzles and placing them in the same categories used in Table 1.",
"Figure 6 shows the results and indicates that knowledge, wordplay, and cross-reference clues make up the majority of errors.",
"End-to-end Analysis We next analyzed the errors for our full system.",
"There are 43 NYT 2021 puzzles that we did not solve perfectly.",
"We manually separated these puzzles into four categories: Themes (21 puzzles).",
"Puzzles with unique themes, e.g., placing four characters in one cell.",
"Local Search Proposals (9 puzzles).",
"Puzzles where we did not propose a puzzle edit in local search that would have improved accuracy.",
"Local Search Scoring (9 puzzles).",
"Puzzles where the ByT5 scorer either rejected a correct proposal or accepted an incorrect proposal.",
"Connected Errors (4 puzzles).",
"Puzzles with 3079 1 2 3 4 5 6 7 ACPT Puzzle Number 0 20 40 60 80 100 P e r ce n t o f P e r f ec t S c o r e ( % ) 100 98 88 100 97 79 100 96 76 100 98 86 89 93 34 100 96 81 91 97 76 BCS + Dr. Fill Top Competitor Median Competitor Figure 5: A breakdown of our 2021 ACPT performance.",
"Overall, the largest source of remaining puzzle failures is special themed puzzles, which is unsurprising as our solver does not explicitly handle themes.",
"The remaining errors are mostly split between proposal and scoring errors.",
"Finally, connected errors typically arise when BP fills in an answer that is in our bi-encoder's answer set but is incorrect, i.e., the first-pass model was overconfident.",
"Past Crossword Solvers Prior to our work, the three most successful automated crossword solvers were Proverb, WebCrow (Ernandes et al., 2005), and Dr. Fill.",
"Dr. Fill uses a relatively straightforward TFIDF-like search for question answering, but Proverb and WebCrow combine a number of bespoke modules for QA; WebCrow also relies on a search engine to integrate external knowledge.",
"On the solving side, Proverb and WebCrow both use loopy belief propagation, combined with A* search for inference.",
"Meanwhile, Dr. Fill, uses a modified depth-first search known as limited discrepancy search, as well as a post-hoc local search with heuristics to score alternate puzzles.",
"Standalone QA Models for Crosswords Past work also evaluated QA techniques using crossword question-answer pairs.",
"These include linear models (Barlacchi et al., 2014), WordNet suggestions (Thomas and S., 2019), and shallow neural networks (Severyn et al., 2015; Hill et al., 2016); we instead use state-of-the-art transformer models.",
"clues while maintaining accurate estimates of model uncertainty.",
"Other QA tasks share similar challenges (Ferrucci et al., 2010; Rodriguez et al., 2021; Rajpurkar et al., 2018; Min et al., 2020).",
"Crossword puzzles pose a novel challenge as they contain unique types of reasoning and linguistic phenomena such as wordplay.",
"Crossword Themes We have largely ignored the presence of themes in crossword puzzles.",
"Themes range from simple topical similarities between answers to puzzles that must be filled in a circular pattern to be correct.",
"While Dr. Fill (Ginsberg, 2011) has a variety of theme handling modules built into it, integrating themes into our probabilis-3080 tic formulation remains as future work.",
"Cryptic Crosswords We solve American-style crosswords that differ from British-style cryptic crosswords (Efrat et al., 2021; Rozner et al., 2021).",
"Cryptic crosswords involve a different set of conventions and challenges, e.g., more metalinguistic reasoning clues such as anagrams, and likely require different methods from those we propose.",
"We have presented new methods for crossword solving based on neural question answering, structured decoding, and local search.",
"Our system outperforms even the best human solvers and can solve puzzles from a wide range of domains with perfect accuracy.",
"Despite this progress, some challenges remain in crossword solving, especially on the QA side, and we hope to spur future research in this direction by releasing a large dataset of question-answer pairs.",
"In future work, we hope to design new ways of evaluating automated crossword solvers, including testing on puzzles that are designed to be difficult for computers and tasking models with puzzle generation.",
"Our data comes primarily from crosswords published in established American newspapers and journals, where a lack of diversity among puzzle constructors and editors may influence the types of clues that appear.",
"For example, only 21% of crosswords published in The New York Times have at least one woman constructor (Chen, 2021) and a crossword from January 2019 was criticized for including a racial slur as an answer (Graham, 2019).",
"We view the potential for real-world harm as limited since automated crossword solvers are unlikely to be deployed widely in the real world and have limited potential for dual use.",
"However, we note that these considerations may be important to researchers using our data for question answering research more broadly.",
"We thank Sewon Min, Sameer Singh, Shi Feng, Nikhil Kandpal, Michael Littman, and the members of the Berkeley NLP Group for their valuable feedback.",
"We are also grateful to Will Shortz and the organizers of the American Crossword Puzzle Tournament for allowing us to compete in the event.",
"This work was funded in part by the DARPA XAI and LwLL programs.",
"Nicholas Tomlin is supported by the National Science Foundation Graduate Research Fellowship."
] | [
"objective",
"result",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other"
] |
[
"Misinformation has recently become a well-documented matter of public concern.",
"Existing studies on this topic have hitherto adopted a coarse concept of misinformation, which incorporates a broad spectrum of story types ranging from political conspiracies to misinterpreted pranks.",
"This paper aims to structurize these misinformation stories by leveraging fact-check articles.",
"Our intuition is that key phrases in a fact-check article that identify the misinformation type(s) ( e.g., doctored images, urban legends) also act as rationales that determine the verdict of the fact-check ( e.g., false).",
"We experiment on rationalized models with domain knowledge as weak supervision to extract these phrases as rationales, and then cluster semantically similar rationales to summarize prevalent misinformation types.",
"Using archived fact-checks from Snopes.com, we identify ten types of misinformation stories.",
"We discuss how these types have evolved over the last ten years and compare their prevalence between the 2016/2020 US presidential elections and the H1N1/COVID-19 pandemics.",
"Misinformation has raised increasing public concerns globally, well-documented in Africa (Ahinko-rah et al., 2020), Asia (Kaur et al., 2018), and Europe (Fletcher et al., 2018).",
"In the US, fake news accounted for 6% of all news consumption during the 2016 US presidential election (Grinberg et al., 2019).",
"Years later, 29% of US adults in a survey believed that the exaggerated threat of the COVID-19 pandemic purposefully damaged former US president Donald Trump (Uscinski et al., 2020), and 77% of Trump's supporters believed voter fraud manipulated the 2020 US presidential election in spite of a complete lack of evidence (Pennycook and Rand, 2021).",
"As such misinformation continues to threaten society, researchers have started investigating this multifaceted problem, from understanding the socio-psychological foundations of susceptibility (Bakir and McStay, 2018) and measuring public responses (Jiang and Wilson, 2018; Jiang et al., 2020b), to designing detection algorithms (Shu et al., 2017) and auditing countermeasures for online platforms (Jiang et al., 2019, 2020c).",
"These studies mostly adopted the term misin-formation as a coarse concept for any false or inaccurate information, which incorporates a broad spectrum of misinformation stories, e.g., political conspiracies to misinterpreted pranks.",
"Although misinformation types have been theorized and categorized by practitioners (Wardle, 2017), there is, to our knowledge, no empirical research that has systematically measured these prevalent types of misinformation stories.",
"This paper aims to unpack the coarse concept of misinformation and structurize it to fine-grained story types (as illustrated in Figure 1).",
"We conduct this query through an empirical lens and ask the question: what are the prevalent types of misinformation stories in the US over the last ten years?",
"The answer to our question is buried in archived fact-checks, which are specialized news articles that verify factual information and debunk false claims by presenting contradictory evidence (Jiang et al., 2020a).",
"As a critical component of their semi-structured journalistic style, fact-checks often embed the (mis)information type(s) within their steps of reasoning.",
"For example, consider the following snippet from a Snopes.com fact-check with a verdict of false (Evon, 2019): ...For instance, some started sharing a doctored photograph of Thunberg with alt-right boogeyman George Soros (the original photograph featured former Vice President Al Gore)...",
"The key phrase doctored photograph in the snippet identifies the misinformation type of the fact-checked story.",
"Additional example phrases are highlighted in Figure 1. With a large corpus of fact-checks, these phrases would accumulate and reveal prevalent types of misinformation stories.",
"Extracting these phrases is a computational task.",
"Our intuition is that such phrases in a fact-check also act as rationales that determine the verdict of the fact-check.",
"In the previous example, the verdict is false in part because the story contains a doctored photograph .",
"Therefore, a neural model that predicts the verdict of a fact-check would also use the misinformation types as rationales.",
"To realize this intuition, we experiment on existing rationalized neural models to extract these phrases (Lei et al., 2016; Jain et al., 2020), and, to target specific kinds of rationales, we additionally propose to include domain knowledge as weak supervision in the rationalizing process.",
"Using public datasets as validation (Zaidan et al., 2007; Carton et al., 2018), we evaluate the performance variation of different rationalized models, and show that including domain knowledge consistently improves the quality of extracted rationales.",
"After selecting the most appropriate method, we conduct an empirical investigation of prevalent misinformation types.",
"Using archived fact-checks from Snopes.com, spanning from its founding in 1994 to 2021, we extract rationales by applying the selected model with theorized misinformation types for weak supervision (Wardle, 2017), and then cluster rationales based on their semantic similarity to summarize prevalent misinformation types.",
"We identify ten types of misinformation stories, a preview of which are shown in Figure 1. Using our derived lexicon of these clustered misinformation stories, we then explore the evolution of misinformation types over the last ten years.",
"Our key findings include: increased prevalence of conspiracy theories, fabricated content, and digital manipulation; and decreased prevalence of legends and tales, pranks and jokes, mistakes and errors, etc.",
"We also conducted two case studies on notable events that involve grave misinformation.",
"From the case study of US presidential elections, we observe that the most prevalent misinformation type for both the 2016 and 2020 elections is fabricated content, while the 2016 election has more hoaxes and satires.",
"From the case study of pandemics, our results show that the H1N1 pandemic in 2009 has more legends and tales, while the COVID-19 pandemic attracts more conspiracy theories.",
"There is a rich literature that has studied the online misinformation ecosystem from multiple perspectives (Del Vicario et al., 2016; Lazer et al., 2018).",
"Within the computational linguistics community, from an audiences' perspective, Jiang and Wilson (2018) found that social media users expressed different linguistic signals when responding to false claims, and the authors later used these signals to model and measure (dis)beliefs in (mis)information (Jiang et al., 2020b; Metzger et al., 2021).",
"From a platforms' perspective, researchers have assisted platforms in designing novel misinformation detection methods (Wu et al., 2019; Lu and Li, 2020; Vo and Lee, 2018, 2020), as well as audited existing misinformation intervention practices (Robertson et al., 2018; Jiang et al., 2019, 2020c; Hussein et al., 2020).",
"In this work, we study another key player in the misinformation ecosystem, storytellers , and investigate the prevalent types of misinformation told to date.",
"From the storytellers' perspective, Wardle (2017) theorized several potential misinformation types ( e.g., satire or parody, misleading content, and false connection), yet no empirical evidence has been connected to this typology.",
"Additionally, researchers have investigated specific types of misinformation as case studies, e.g., state-sponsored disinformation (Starbird et al., 2019; Wilson and Starbird, 2020), fauxtography (Zannettou et al., 2018; Wang et al., 2021), and conspiracy theories (Samory and Mitra, 2018; Phadke et al., 2021).",
"In this paper, we aim to structurize these misinformation stories to theorized or novel types.",
"Realizing our intuition (as described in 1) requires neural models to (at least shallowly) reason about predictions.",
"In this section, we introduce existing rationalized neural models and propose to include domain knowledge as weak supervision in the rationalizing process.",
"We then experiment with public datasets and lexicons for evaluation.",
"In a standard text classification problem, each instance is in a form of ( x , y ) .",
"x = [ x i ] V lx is the input token sequence of length l , where V x is the vocabulary of the input and i is the index of each token x i .",
"y { 0 , 1 } m is the binary label of length m .",
"Rationalization requires a model to output the prediction y together with a binary mask z = [ z i ] { 0 , 1 } l of input length l , indicating which tokens are used ( i.e., z i = 1 ) to make the decision.",
"These tokens are called rationales .",
"Hard rationalization requires a model to directly output z .",
"Initially proposed by Lei et al. (2016), the model first passes the input x to a tagger 1 module and samples a binary mask z from a Bernoulli distribution, i.e., z Tagger ( x ) , and then uses only unmasked tokens to make a prediction of y , i.e., y = Predictor ( z , x ) .",
"2 The loss function of this method contains two parts.",
"The first part is a standard loss for the prediction L y ( y , y ) , which can be realized using common classification loss, e.g., cross entropy.",
"The second part is a loss L z ( z ) 3 aiming to regularize z and encourage conciseness and contiguity of rationale selection, formulated by Lei et al. (2016).",
"Recent work proposed to improve the initial model with an adversarial component (Yu et al., 2019; Carton et al., 2018).",
"Combining these parts together, the 1 This module was named generator by Lei et al. (2016).",
"We name it tagger to distinguish it from the NLG problem.",
"2 This module was named encoder by Lei et al. (2016).",
"We name it predictor , consistent with Yu et al. (2019), to distinguish it from the encoder-decoder framework.",
"3 L z ( z ) is a simplified term; we discuss its detailed implementation in Appendix A.",
"model is trained end-to-end using reinforce-style estimation (Williams, 1992), as sampling rationales is a non-differentiable computation.",
"The modules of hard rationalization are illustrated in Figure 2. Soft rationalization , in contrast, allows a model to first output a continuous version of importance scores s = [ s i ] R l , and then binarize it to get z .",
"Initially formalized by Jain et al. (2020) as a multiphase method, the model first conducts a standard text classification using a supporter module y = Supporter ( x ) and outputs importance scores s , then binarizes s using a tagger module, i.e., z = Tagger ( s ) , and finally uses only unmasked tokens of x to make another prediction y to evaluate the faithfulness of selected rationales.",
"4 These three modules are trained separately in three phases.",
"5 Since the supporter and predictor are standard text classification modules the only loss needed is for the prediction L y ( y , y ) .",
"This method is more straightforward than the hard rationalization method, as it avoids non-differentiable com-4 The second and third modules were named extractor and classifier by Jain et al. (2020).",
"We continue using tagger and predictor to align with the hard rationalization method.",
"putations and the instability induced by reinforce-style estimation.",
"The modules of soft rationalization are also illustrated in Figure 2. The popular attention mechanism (Bahdanau et al., 2014) provides built-in access to s .",
"Although there have been debates on the properties achieved by attention-based explanations (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019; Serrano and Smith, 2019), rationales extracted by straightforward rules on attention weights were demonstrated as comparable to human-generated rationales (Jain et al., 2020).",
"Additionally, in our use case we only need the rationales themselves as key phrases and do not require them to faithfully predict y , therefore the last predictor module can be omitted.",
"Both hard and soft rationalization methods can be trained with or without supervision w.r.t. rationales z (DeYoung et al., 2020) 6 .",
"When rationales are selected in an unsupervised manner, the model would intuitively favor rationales that are most informative to predict the corresponding label as a result of optimizing the loss function.",
"This could result in some undesirable rationales in our case: for example, certain entities like COVID-19 or Trump that are highly correlated with misinformation would be selected as rationales even though they do not suggest any misinformation types.",
"Therefore, we propose to weakly supervise 7 the rationalizing process with domain knowledge to obtain specific, desired types of rationales.",
"Assuming a lexicon of vocabulary V d as domain knowledge, we reprocess the input and generate weak labels for rationales z d = [ z i d ] { 0 , 1 } l where z i d = 1 ( i.e., unmasked) if x i V d and z i d = 0 ( i.e., masked) otherwise.",
"Then, we include an additional loss item L d ( z , z d ) or L d ( s , z d ) for the hard or soft rationalization method.",
"Combining the loss items together, the objective for the end-to-end hard rationalization model is: min L y ( y , y ) + z L z ( z ) + d L d ( z , z d ) , where contains the parameters to estimate and ( ) are hyperparameters weighting loss items.",
"Similarly, the objective function for the first phase of soft rationalization is: min L y ( y , y ) + d L d ( s , z d ) .",
"6 They are trained with supervision w.r.t. the label y .",
"7 Since there is inherently no ground-truth of misinformation types in fact-check articles.",
"We conduct experiments on public datasets to evaluate the performance of hard and soft rationalization methods, particularly for our needs, and confirm that including domain knowledge as weak supervision helps with the rationalizing process.",
"Datasets selection.",
"An ideal dataset for our models should meet the following requirements:",
"(a) formulated as a text classification problem,",
"(b) annotated with human rationales, and",
"(c) can be associated with high quality lexicons to obtain domain knowledge.",
"We select two datasets based on these criteria: the movie reviews dataset released by Pang et al. (2002) and later annotated with rationales by Zaidan et al. (2007), which contains 2K movie reviews labeled with positive or negative sentiments; and the personal attacks dataset released by Wulczyn et al. (2017) and later annotated with rationales by Carton et al. (2018), which contains more than 100K Wikipedia comments labeled as personal attacks or not.",
"Domain knowledge.",
"For the sentiment analysis on movie reviews, we use the EmoLex lexicon released by Mohammad and Turney (2013), which contains vocabularies of positive and negative sentiments.",
"For identifying personal attacks, we use a lexicon released by Wiegand et al. (2018), which contains a vocabulary of abusive words.",
"With corresponding vocabularies, we generate weak rationale labels z d for each dataset.",
"Evaluation metrics.",
"We choose binary precision Pr ( z ) to evaluate the quality of extracted rationales, because",
"(a) a perfect recall can be trivially achieved by selecting all tokens as rationales, 8 and",
"(b) our case of identifying key phrases requires concise rationales.",
"Additionally, we measure the average percentage of selected rationales over the input length % ( z ) .",
"For predictions, we use macro F 1 ( y ) as the evaluation metric as well as the percentage of information used % ( x ) to make the prediction.",
"Experimental setup and results.",
"The train, dev, and test sets are pre-specified in public datasets.",
"We optimize hyperparameters for F 1 ( y ) on the dev sets, and only evaluate rationale quality Pr ( z ) after a model is decided.",
"We discuss additional implementation details ( e.g., hyperparameters, loss functions, module cells) in Appendix A. 8 We later show that this is the default model behavior if rationale selection is under-regularized.",
"The evaluation results for all our experiments on test sets are reported in Table 1, indexed with h 0 -h 3 and s 0 -s 3 .",
"We report the evaluation results on dev sets in Appendix B. Regularization for hard rationalization.",
"h 0 and h 2 are our re-implementation of Lei et al. (2016), varying the rationale regularization hyperparameter z .",
"Our experiments show that z is a crucial choice.",
"When a small z is chosen ( i.e., rationales are under-regularized), the model has a tendency to utilize all the available information to optimize the predictive accuracy.",
"In h 2 , we set z = 0 and the model selects 99.9% of tokens as rationales while achieving the best F 1 ( y ) overall, which is an undesirable outcome in our case.",
"Therefore, we increases z so that only small parts of tokens are selected as rationales in h 0 .",
"However, echoing Jain et al. (2020), the output when varying z is sensitive and unpredictable, and searching for this hyperparameter is both time-consuming and energy-inefficient.",
"We also run an experiment h 3 with the additional adversarial component proposed in (Carton et al., 2018; Yu et al., 2019), and the evaluation metrics are not consistently improved compared to h 0 .",
"Binarization for soft rationalization.",
"s 0 , s 2 and s 3 are our re-implementation of Jain et al. (2020).",
"For soft rationalization, rationales are selected ( i.e., binarized) after the supporter module is trained in phase one, therefore s 0 -s 3 utilize 100% of the tokens by default, and achieve the best F 1 ( y ) overall.",
"We implement a straightforward approach to select rationales by setting a threshold t and make z i = 1 ( i.e., unmasked) if the importance score s i > t and z i = 0 ( i.e., masked) otherwise.",
"Intuitively, increasing t corresponds to less selected rationales, and therefore increasing Pr ( z ) .",
"To confirm, in s 2 , we increase t until % ( z ) is exactly half of s 0 .",
"Similarly, decreasing t corresponds to more selected rationales, and therefore decreasing Pr ( z ) .",
"In s 3 , we decrease t until % ( z ) is exactly double of s 0 .",
"Is domain knowledge helpful?",
"h 1 and s 1 include domain knowledge as weak supervision.",
"Our results show that domain knowledge improves Pr ( z ) for both hard (h 1 to h 0 ) and soft (s 1 to s 0 ) rationalization methods and on both dataset, while maintaining similar % ( z ) and F 1 ( y ) .",
"The improvements are more substantial for soft rationalization.",
"Hard vs. soft rationalization.",
"To fairly compare hard and soft rationalization methods, we choose the threshold t to keep % ( z ) the same for h 1 and s 1 .",
"9 Our experiments show that soft rationalization weakly supervised by domain knowledge achieves better Pr ( z ) on both datasets, and therefore we chose it for rationalizing fact-checks.",
"After determining that soft rationalization is the most appropriate method, we apply it to extract rationales from fact-checks.",
"In this section, we introduce the dataset we collected from Snopes.com and conduct experiment with fact-checks to structurize misinformation stories.",
"9 We can easily and accurately manipulate % ( z ) for soft rationalization by adjusting t ; conversely, the impact of adjusting z in hard rationalization is unpredictable.",
"2018).",
"We collect HTML webpages of fact-check articles from Snopes.com, spanning from its founding in 1994 to the beginning of 2021.",
"Preprocess and statistics.",
"We first preprocess collected fact-checks by extracting the main article content and verdicts from HTML webpages using a customized parser, and tokenizing the content with NLTK (Bird, 2006).",
"The preprocessing script is included in our released codebase.",
"After preprocessing, the median sequence length of fact-checks is 386 tokens, and 88.6% of fact-checks containing 1,024 tokens.",
"Jiang et al. (2020a) found that the most informative content in fact-checks tended to be located at the head or the tail of the article content.",
"Therefore, we set the maximum sequence length to 1,024 and truncate over-length fact-checks.",
"Next, we label each fact-check with a binary label depending on its verdict: (truthful) information if the verdict is at least mostly true and misinformation otherwise, which results in 2,513 information and 11,183 misinformation instances.",
"Additionally, we preemptively mask tokens that are the exact words as its verdict ( e.g., rate it as false to rate it as [MASK] ), 10 otherwise predicting the verdict would be trivial and the model would copy overlapping tokens as rationales.",
"Domain knowledge for misinformation types.",
"The domain knowledge comes from two sources:",
"(a) the misinformation types theorized by Wardle (2017), e.g., misleading or fabricated content; and",
"(b) certain variants of verdicts from Snopes.com such as satire or scam (Snopes.com, 2021a).",
"We combine these into a small vocabulary V d containing 12 words, listed in Appendix A. 4.2 Experiments and Results We randomly split the fact-checks to 80% train, 10% dev, and 10% test sets, and adjust hyperparameters to optimize F 1 ( y ) on dev set.",
"For initialization, we train word embeddings using Gensim (Re-hurek and Sojka, 2011) on the entire corpus.",
"The final model achieves F 1 ( y ) = 0 .",
"75 / 0 .",
"74 on the test set with/without domain knowledge.",
"Clustering rationales.",
"To systematically understand extracted rationales, we cluster these rationales based on semantic similarity.",
"For each rationale, we average word embeddings to represent 10 Verdicts from Snopes.com are structured HTML fields that can be easily parsed.",
"the embedding of the rationale, and then run a hierarchical clustering for these embeddings.",
"The hierarchical clustering uses cosine similarity as the distance metric, commonly used for word embeddings (Mikolov et al., 2013), and the complete link method (Voorhees, 1986) to obtain a relatively balanced linkage tree.",
"The results from the clustering are shown in Figure 3. From the root of the dendrogram, we can traverse its branches to find clusters until we reach a sensible threshold of cosine distance, and categorize the remaining branches and leaf nodes ( i.e., rationales) to multiple clusters.",
"Figure 3 shows an example visualization that contains ten clusters of rationales that are semantically similar to the domain knowledge, and leaf nodes in each cluster are aggregated to plot a word cloud, with the frequency of a node encoded as the font size of the phrase.",
"Note that rationales extracted from soft rationalization are dependent on the chosen threshold t to binarize importance scores.",
"The example in Figure 3 uses a threshold of t = 0 .",
"01 .",
"Varying the threshold would affect extracted rationales but mostly the ones with low prevalence, and these rare rationales also correspond to small font sizes in the word cloud.",
"Therefore, the effect from varying t would be visually negligible in Figure 3. Structure of misinformation stories.",
"First, the clusters empirically confirm existing domain knowledge in V d .",
"Certain theorized misinformation types, such as satires and parodies (cid:4) from (Wardle, 2017), are identified as individual clusters from fact-checks.",
"Second, the clusters complement V d with additional phrases describing (semantically) similar misinformation types.",
"For example, our results add humor and gossip to the same category as satires and parodies (cid:4) and add tales and lore to the same category as legends (cid:4) .",
"This helps us grasp the similarity between misinformation types, and also enriches the lexicon V d , which proves useful for subsequent analysis in 5.",
"Third, we discover novel, fine-grained clusters that are not highlighted in V d .",
"There are multiple possible explanations as to why these misinformation types form their own clusters.",
"Conspiracy theories (cid:4) are often associated with intentional political campaigns (Samory and Mitra, 2018) which can affect their semantics when referenced in fact-checks.",
"In contrast, digital alteration (cid:4) is a relatively recent misinformation tactic that has been enabled by technological developments such as FaceSwap (Ko-rshunova et al., 2017) and DeepFake (Westerlund, 2019).",
"Hoaxes and pranks (cid:4) often have a mischievous intent that distinguishes them from other clusters.",
"Other new clusters include clickbait with inflammatory and sensational language (cid:4) and entirely fictional content (cid:4) .",
"Fourth, the clusters reorganize the structure of these misinformation types based on their semantics, e.g., fabricated and misleading content (cid:4) belongs to two types of misinformation in (Wardle, 2017), while in our results they are clustered together.",
"This suggests that the semantic distance between fabricated and misleading content is less than the chosen similarity threshold, at least when these misinformation types are referred to by fact-checkers when writing articles.",
"Finally, the remaining words in V d are also found in our rationales.",
"However, due to low prevalence, they are not visible in Figure 3 and do not form their own clusters.",
"In this section, we leverage the clusters of misinformation types identified by our method as a lexicon and apply it back to the our original fact-check dataset.",
"Specifically, we analyze the evolution of misinformation types over the last ten years and compare misinformation trends around major real-world events.",
"Evolution over the last ten years.",
"We first explore the evolution of misinformation over time.",
"We map each fact-check article with one or more corresponding misinformation types identified by our method, and then aggregate fact-checks by year from before 2010 11 to the end of 2020 to estimate the relative ratio of each misinformation type.",
"As shown in Figure 4, 12 the prevalence of certain misinformation types on Snopes.com has drastically changed over the last ten years.",
"Heavily politicized misinformation types, such as digitally altered or doctored images or photographs (cid:4) , fabricated and misleading content (cid:4) , and conspiracy theories (cid:4) have nearly doubled in relative ratios over the last ten years.",
"In contrast, the prevalence of (arguably) less politicized stories, such as legends and tales (cid:4) , hoaxes and pranks (cid:4) , and mistakes and errors (cid:4) have decreased.",
"These trends may be a proxy for the underlying prevalence of different misinformation types within the US.",
"Studies that measure political ideologies 11 Since there are relatively few fact-checks before 2010, we aggregate them together to the year 2010.",
"12 95% confidence intervals.",
"Additionally, the convenience offered by modern digital alteration software and applications (Korshunova et al., 2017; Westerlund, 2019) provides a gateway to proliferating manipulated images or photographs in the misinformation ecosystem.",
"Alternatively, these trends may reflect shifts in Snopes.com's priorities.",
"The website, launched in 1994, was initially named Urban Legends Reference Pages .",
"Since then it has grown to encompass a broad spectrum of subjects.",
"Due to its limited resources, fact-checkers from Snopes.com only cover a subset of online misinformation, and their priority is to fact-check whatever items the greatest number of readers are asking about or searching for at any given time (Snopes.com, 2021b). 13 Given the rising impact of political misinformation in recent years (Zannettou et al., 2019, 2020), such misinformation could reach an increasing number of Snopes.com readers, and therefore the website may dedicate more resources to fact-checking related types of misinformation.",
"Additionally, Snopes.com has established collaborations with social media platforms, e.g., Facebook (Green and Mikkelson), to specifically target viral misinformation circulating on these platforms, where the rising meme culture could also attract Snopes.com's attention and therefore explain a surge of digitally altered images (Ling et al., 2021; Wang et al., 2021).",
"13 Users can submit a topic to Snopes.com on its contact page (Snopes.com, 2021c), the results from which may affect Snopes.com's priorities.",
"2016 vs. 2020 US presidential election.",
"We now compare misinformation types between the 2016 and 2020 elections.",
"To filter for relevance, we constrain our analysis to fact-checks that (1) were published in the election years and (2) included the names of the presidential candidates and/or their running mates ( e.g., Joe Biden and Kamala Har-ris).",
"This results in 2,586 fact-checks for the 2016 election and 2,436 fact-checks for 2020.",
"The prevalence of each misinformation type is shown in Figure 5.",
"We observe that the relative ratios of many misinformation types are similar between the two elections, e.g., legends and tales (cid:4) and bogus scams (cid:4) , while the 2016 election has more hoaxes (cid:4) , satires (cid:4) , etc.",
"The most prevalent type during both elections is fabricated and misleading content (cid:4) , next to conspiracy theories (cid:4) .",
"H1N1 vs. COVID-19.",
"Finally, we compare misinformation types between the H1N1 pandemic in 2009 and the COVID-19 pandemic.",
"For H1N1 related fact-checks, we search for keywords flu, influenza, and H1N1 in fact-checks and constrain the publication date until the end of 2012.",
"14 For COVID-19 related fact-checks, we search for keywords COVID-19 and coronavirus, and only consider fact-checks published in 2019 or later, which results in 833 fact-checks for the H1N1 pandemic and 656 fact-checks for COVID-19.",
"The relative ratio of each misinformation type is also shown in Figure 5.",
"We observe that the prevalence of some misinformation types are sig-14 WHO declared an end to the global 2009 H1N1 pandemic on August 10, 2010, yet misinformation about H1N1 continues to spread (Sundaram et al., 2013), therefore we extend the time window by two more years.",
"Notably, the H1N1 pandemic has many more legends and tales (cid:4) , while COVID-19 has more conspiracy theories (cid:4) .",
"The increased prevalence of COVID-19 related conspiracies aligns with recent work measuring the same phenomena (Uscinski et al., 2020; Jolley and Paterson, 2020), especially as the COVID-19 pandemic becomes increasingly politicized (Hart et al., 2020; Rothgerber et al., 2020; Weisel, 2021).",
"Limitations and future directions.",
"We adopted a computational approach to investigate our research question, and this method inherently shares common limitations with observational studies, e.g., prone to bias and confounding (Benson and Hartz, 2000).",
"Specifically, our corpus contains fact-checks from Snopes.com, one of the most comprehensive fact-checking agencies in the US.",
"Snopes.com covers a broader spectrum of topics than politics-focused fact-checkers ( e.g., Politi-Fact.com, FactCheck.org), 15 and thus we argue that it covers a representative sample of misinformation within the US.",
"However, Snopes.com may not be representative of the international misinformation ecosystem (Ahinkorah et al., 2020; Kaur et al., 2018; Fletcher et al., 2018).",
"In the future, we hope that our method can help characterize misinformation comparatively on a global scale when more structured fact-checks become available.",
"16 Additionally, fact-checkers are time constrained, as thus the misinformation stories they cover tend to be high-profile.",
"Therefore low-prevalence, long-tail misinformation stories may not be observed in our study.",
"Understanding low-volume misinformation types may require a different collection of corpora other than fact-checks, e.g., a cross-platform investigation on social media conversations (Wilson and Starbird, 2020; Abilov et al., 2021).",
"Lastly, the misinformation types we extract from our weakly supervised approach are not validated with ground-truth labels.",
"This is largely due to the lack of empirical knowledge on misinformation types, and therefore we are unable to provide specific guidance to annotators.",
"Although the clusters in Figure 3 provide straightforward structure of misinformation stories, in future work, we plan to leverage these results to construct annotation guidelines and obtain human-identified misinformation types for further analysis.",
"Conclusion.",
"In this paper, we identify ten prevalent misinformation types with rationalized models on fact-checks and analyze their evolution over the last ten years and between notable events.",
"We hope that this paper offers an empirical lens to the systematic understanding of fine-grained misinformation types, and complements existing work investigating the misinformation problem.",
"This research was supported in part by NSF grant IIS-1553088.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF.",
"This paper uses Snopes.com fact-checks to train and validate our models, and also includes several quotes and snippets of fact-checks.",
"We consider our case a fair use under the US 17 copyright law, which permits limited use of copyrighted material without the need for permission from the copyright holder.",
"According to 17 U.S.C. 107, we discuss how our research abides the principles that are considered for a fair use judgment: Purpose and character of the use: we use fact-checks for noncommercial research purpose only, and additionally, using textual content for model training is considered to be transformative, cf.",
"Authors Guild, Inc. v. Google Inc. (2013, 2015, 2016).",
"Amount and substantiality: we present only snippets of fact-checks for illustrative purpose in our paper ( i.e., several quotes and snippets in text and figures), and only URLs to original fact-checks in our public dataset.",
"Effect upon work's value: we do not identify any adverse impact our work may have on the potential market ( e.g., ads, memberships) of the copyright holder.",
"The end goal of our research aligns with that of Snopes.com, i.e., to rebut misinformation and to restore credibility to the online information ecosystem.",
"We hope the aggregated knowledge of fact-checks from our models can shed light on this road and be a helpful addition to the literature."
] | [
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"objective",
"abstain",
"method",
"result",
"result",
"other",
"other",
"other",
"objective",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"method",
"result",
"abstain",
"method",
"abstain",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"We propose a neural event coreference model in which event coreference is jointly trained with five tasks: trigger detection, entity coreference, anaphoricity determination, realis detection, and argument extraction.",
"To guide the learning of this complex model, we incorporate cross-task consistency constraints into the learning process as soft constraints via designing penalty functions.",
"In addition, we propose the novel idea of viewing entity coreference and event coreference as a single coreference task, which we believe is a step towards a unified model of coreference resolution.",
"The resulting model achieves state-of-the-art results on the KBP 2017 event coreference dataset.",
"Event coreference resolution is the task of determining whether two event mentions in a document refer to the same real-world event.",
"For two event mentions to be coreferent, their triggers (i.e., the words realizing the occurrence of the events) should have the same subtype and their corresponding arguments (e.g., the times, places, and people involved) have to be entity-coreferent.",
"However, identifying potential arguments (which is performed by an entity extraction system), linking arguments to their event mentions (which is also performed by an event extraction system), and determining whether two event arguments are coreferent (which is the job of an entity coreference resolver), are all nontrivial tasks.",
"Hence, a key challenge in designing an event coreference resolver involves determining how to integrate these noisy components.",
"One of the most common approaches to event coreference resolution is pipelined approaches, where a trigger detection component, which identifies triggers and assigns event subtypes to them, is followed by an event coreference component, which clusters coreferent event mentions.",
"It should therefore not be surprising that errors propagate from the trigger detection component to the event coreference component.",
"To avoid aggravating this error propagation problem, knowledge provided by other information extraction (IE) components (e.g., entity coreference, event arguments) is typically employed as features for training event coreference models (Chen et al., 2009; McConky et al., 2012; Cybulska and Vossen, 2013; Araki et al., 2014; Liu et al., 2014; Peng et al., 2016; Krause et al., 2016; Choubey and Huang, 2017).",
"Oftentimes, these features provide limited improvements to event coreference models as they are too noisy to be useful.",
"Though less popular than pipelined approaches, bootstrapping approaches have been used for event coreference resolution, where an event coreference model is bootstrapped with models trained for one or more related IE tasks.",
"For instance, Lee et al. (2012) incrementally build clusters of coreferent event and entity mentions by iteratively bootstrapping event coreference output using entity coreference output and vice versa.",
"While in pipelined approaches only upstream tasks can influence downstream tasks, in bootstrapping approaches different tasks can influence each other.",
"Nevertheless, errors made in earlier iterations of the bootstrapping process cannot be undone in later iterations.",
"Joint learning approaches have recently emerged as promising approaches to event coreference owing to their ability to address error propagation.",
"In these approaches, two or more tasks are jointly trained.",
"For instance, Araki and Mitamura (2015) learn a joint model for trigger detection and event coreference using a structured perceptron, and Lu and Ng (2017) learn a joint model for trigger detection, event coreference, and anaphoricity determination using a structured conditional random field.",
"The key advantage of these models is that the tasks involved can benefit from each other during training.",
"However, since a jointly learned model involves multiple tasks, it is typically complex.",
"In fact, it is by no means easy to scale such a model to a large number of tasks because of the high computational complexity involved in learning.",
"Joint inference approaches have also been applied to event coreference resolution.",
"For instance, Chen and Ng (2016) and Lu et al. (2016) first train separate models for entity coreference, trigger detection, argument extraction, and event coreference, then use Integer Linear Programming or Markov Logic Networks to jointly infer the outputs of these tasks subject to (mostly) hard cross-task consistency constraints.",
"For instance, one such hard constraint says that two coreferent event mentions should have the same event subtype.",
"Since the models are trained independently, they cannot benefit from each other and could be noisy.",
"Worse still, performing joint inference using hard constraints over (very) noisy outputs could do more harm than good.",
"For instance, if two event mentions are correctly classified as coreferent but one of their subtypes is misclassified, then enforcing the aforementioned constraint might cause the joint inference procedure to incorrectly infer that the two are not coreferent.",
"This explains why joint inference approaches have become less popular than joint learning approaches in recent years.",
"In light of the above discussion, we seek to advance the state of the art in event coreference resolution by proposing a model that jointly learns six tasks: trigger detection, event coreference, entity coreference, anaphoricity determination, argument extraction, and realis detection.",
"As noted above, joint learning typically presents a serious computational challenge, and training a complex joint model involving six tasks would not have been possible without the advent of the neural NLP era.",
"While multi-task learning in a neural network typically allows the different tasks involved to benefit from each other via learning shared representations, we hypothesize that the model would benefit additional guidance given that the learning task, which involves six tasks, is so complex.",
"Consequently, we propose to guide the learning process by exploiting cross-task consistency constraints.",
"As mentioned above, such consistency constraints are typically employed in joint inference and rarely in joint learning.",
"Moreover, unlike in joint inference where such constraints are typically implemented as hard constraints, we provide flexibility by implementing them as soft constraints.",
"Specifically, we design penalty functions for penalizing outputs that violate a constraint, where the degree of penalty depends on the extent of the violation.",
"Another contribution of our work involves proposing the idea of a unified coreference model.",
"So far, entity and event coreference have always been viewed as two separate tasks, where links between entity mentions are distinguished from links between event mentions.",
"However, their similarity has led us to hypothesize that they could be viewed as a single task, where coreference links are established between a set of mentions without distinguishing between entity and event mentions.",
"Traditional resolvers.",
"Many existing event coreference resolvers, including those that employ the four approaches described in the introduction, are developed in the pre-neural era.",
"resolvers.",
"For a detailed overview of these non-neural resolvers and the wide variety of hand-engineered features they employ, we refer the reader to Lu and Ng (2018).",
"Neural resolvers.",
"Of particular relevance to our work are neural event coreference models (e.g., Nguyen et al. (2016), Choubey and Huang (2017, 2018), Huang et al. (2019)).",
"Unlike their traditional counterparts, neural coreference models can leverage the knowledge learned from large unlabeled corpora through pretrained word embeddings or transfer learning.",
"Existing neural event coreference models are pipeline-based and seek to learn word representations so that coreferent event mentions have similar word embeddings, effectively making the rather unrealistic assumption that an event trigger is composed of a single token (Nguyen et al., 2016).",
"In contrast, our neural resolver is a joint model and seeks to learn the representations of text spans, each of which corresponds to a candidate event trigger and may be composed of more than one token, so that coreferent event mentions have similar span representations.",
"Constrained learning in neural models.",
"Another line of related work concerns the use of constraints in neural models (Li et al., 2019; Wang et al., 2020), where constraints are represented as first order logic formulas and compiled into the loss functions.",
"These models are typically trained to minimize the weighted sum of task losses and constraint losses.",
"Rather than introduce additional terms in the loss function, we employ constraints as penalty terms when learning to score how likely two event mentions are coreferent, effectively making the two mentions less likely to be coreferent if {Two men} en 1 accused of {hacking} ev 1 {a British soldier} en 2 to {death} ev 2 last month appeared in {separate courts} en 3 for hearings.",
"{The men} en 4 , {Michael Adebolajo} en 5 , 28, and {Michael Adebowale} en 6 , 22, face {murder} ev 3 charges.",
"{Adebolajo} en 7 was also charged with other offenses, including the attempted {murder} ev 4 of {two police officers} en 8 .",
"Before formally defining the six tasks in the next section, we introduce several related definitions.",
"An event mention is an explicit occurrence of an event consisting of a textual trigger, arguments or participants (if any), and the event subtype , and can optionally be characterized by a set of attributes and their values.",
"An entity mention is an explicit mention of an entity in a text that has an entity type.",
"An event trigger is a string of text that most clearly expresses the occurrence of an event, usually a word or a multi-word phrase.",
"An event argument is an argument filler that plays a certain role in an event.",
"Realis denotes whether an event actually happened or will happen in the future, or whether it is a generic event.",
"Its value can be ACTUAL , GENERIC or OTHER .",
"An event/entity coreference chain is a group of event/entity mentions that refer to the same real-world event/entity.",
"To better understand these definitions, consider the example in Table 1.",
"The four event mentions ( ev 1 , ev 2 , ev 3 , ev 4 ) are triggered by hacking, death, murder, and murder respectively.",
"The first three have ACTUAL as their realis and the last one belongs to OTHER .",
"While ev 2 has LIFE _D IE as its subtype, the remaining ones all have subtype CONFLICT _A TTACK .",
"Among the eight entity mentions ( en 1 , . . . , en 8 ), en 3 has FACILITY as its type and the remaining ones are all PERSON s.",
"en 1 and en 2 are the arguments of ev 1 filling the roles of ATTACKER and TARGET respectively, whereas en 4 is the argument of ev 3 having the role ATTACKER .",
"There are two entity coreference chains (one composed of en 1 and en 4 and the other en 5 and en 7 ) and one event coreference chain ( ev 1 and ev 3 ).",
"We design a span-based neural model for event coreference resolution owing to its ability to effectively learn representations of text spans .",
"While span-based models have been successfully applied to a variety of entity-based IE tasks such as entity coreference (Lee et al., 2017; Joshi et al., 2020) and relation extraction (Luan et al., 2019), they have not been applied to event coreference.",
"More formally, our model takes as input a document D represented as a sequence of word tokens, from which we extract all possible intra-sentence spans of up to length L .",
"It simultaneously learns six tasks, which we define below.",
"The trigger detection task aims to assign each span i a subtype label y i .",
"Each y i takes a value in a subtype inventory or NONE , which indicates that i is not a trigger.",
"The model predicts i 's subtype to be y i = arg max y t s t ( i, y t ) , where s t is a scoring function suggesting i 's likelihood of having y i as its subtype.",
"The event coreference resolution task aims to assign span i an antecedent y c , where y c { 1 , . . . , i 1 , (cid:15) } .",
"In other words, the value of y c is the id of i 's antecedent, which can be one of the preceding spans or a dummy antecedent (cid:15) (if the event mention underlying i starts a new cluster).",
"We define the following scoring function: s c ( i, j ) = (cid:26) 0 j = (cid:15) s m ( i ) + s m ( j ) + s p ( i, j ) j (cid:54) = (cid:15) (1) where s m ( i ) is the score suggesting span i 's likelihood of being a trigger and s p ( i, j ) is a pairwise coreference score computed over span i and a preceding span j .",
"The model predicts the antecedent of i to be y c = arg max j Y ( i ) s c ( i, j ) , where Y ( i ) is the set of i 's candidate antecedents.",
"The entity coreference resolution task involves identifying entity mentions that refer to the same real-world entity.",
"Intuitively, entity coreference is useful for event coreference: two event mentions are not likely to be coreferent if there exists an argument role (e.g., ATTACKER ) for which the corresponding arguments in the two event mentions are not entity-coreferent.",
"In our model, it is defined in the same way as the event coreference resolution task except that it operates on the spans identified by the entity mention detection component rather than the trigger detection component.",
"The entity mention detection task is defined in the same way as the trigger detection task except that it aims to assign each span i an entity type label.",
"The anaphoricity determination task aims to assign each span i an anaphoricity label y a , where y a can be ANAPHORIC , which indicates that the mention having span i is coreferent with a preceding mention, or NON-ANAPHORIC .",
"The model s a ( i ) predicts the mention having span i as anaphoric if and only if s a ( i ) 0 .",
"To train this model, we set the target value to 1 for anaphoric mentions and 1 for non-anaphoric mentions.",
"Anaphoricity is useful for coreference: it prevents non-anaphoric mentions from being resolved.",
"The realis detection task aims to assign each span i a realis label y r , where y r {A CTUAL , GENERIC , OTHER , ENTITY , and NONE }.",
"As mentioned in Section 3, ACTUAL , GENERIC , and OTHER are labels used for event mention spans.",
"To enable every span i to be assigned a realis label, we augment the realis label set to include ENTITY and NONE .",
"Specifically, ENTITY is a label that is exclusively reserved for spans that correspond to entity mentions, and NONE indicates that i does not correspond to a mention.",
"The model predicts the realis type of i to be y r = arg max y r s r ( i, y r ) , where s r is a scoring function suggesting i 's likelihood of having realis type y r .",
"Realis detection is useful for event coreference: two event mentions cannot be coreferent if their realis labels are different.",
"The argument extraction task aims to assign an argument role label y o to a candidate argument k of a candidate event mention span i , where (1) k is a candidate entity mention span, and (2) y o is a role taken from an argument role inventory or NONE , which indicates that the token is not an argument of i .",
"We consider (1) k to be a candidate argument of i if and only if it appears within the same sentence as i ; and (2) a span to be a candidate event/entity mention span if it is assigned a non-N ONE event/entity type by the Mention Prediction Layer, which we will describe shortly.",
"For each candidate argument k of i , the model predicts its role in i to be y o = arg max y o s o ( i, k, y o ) , where s o is a scoring function suggesting token k 's likelihood of being an argument of i having role y o .",
"Arguments, when combined with entity coreference chains, would be useful for event coreference.",
"Span Representation Layer We adapt the inde-pendent version of Joshi et",
"al.'s (2019) state-of-the-art entity coreference resolver to event coreference resolution.",
"Specifically, we divide an input document into non-overlapping regions, each of which has size L d .",
"The word sequence in each region serves as an input training sequence.",
"We then pass the sequence into a pretrained transformer encoder used in SpanBERT-large (Joshi et al., 2020) to encode tokens and their contexts.",
"Finally, we set g i , the representation of span i , to [ h start ( i ) ; h end ( i ) ; h head ( i ) ; f i ] , where h start ( i ) and h end ( i ) are the hidden vectors of the start and end tokens of the span, h head ( i ) is an attention-based head vector and f i is a span width feature embedding.",
"To maintain computational tractability, we first compute a score s m for each span i : s m ( i ) = FFNN m ( g i ) (2) where FFNN is a standard feedforward neural network.",
"Then we retain only the top N % of the spans for further processing.",
"Trigger Prediction Layer For each span i that survives the filtering, we pass its representation g i to a FFNN, which outputs a vector ot i of dimension T , where T is the number of possible event subtypes (including NONE ).",
"ot i ( y ) , the y th element of ot i , is a score indicating i 's likelihood of belonging to event subtype y .",
"Specifically: ot i = FFNN t ( g i ) (3) s t ( i, y ) = ot i ( y ) (4) Anaphoricity Prediction Layer We predict the anaphoricity value of each top span i as follows.",
"Since the anaphoricity of a mention is dependent on its preceding context, we first concatenate the average of the representations of the 25 tokens immediately preceding i (to approximate i 's preceding context) with the span representation g i .",
"We then pass the resulting vector, cx i , to a FFNN, which outputs an anaphoricity value.",
"Specifically: s a ( i ) = FFNN a ( cx i ) (5) Realis Prediction Layer To predict the realis value of each top span i , we pass its representation g i to a FFNN, which outputs a vector or i of length 5.",
"or i ( y ) , the y th element of or i , is a score indicating i 's likelihood of having realis type y : or i = FFNN r ( g i ) (6) Figure 1: Model structure.",
"Coreference Prediction Layer To predict event coreference links, we define the pairwise score between span i and span j as follows:",
"where denotes element-wise multiplication, g i g j encodes the similarity between i and j , and u ij is a feature embedding encoding the distance between them.",
"We can then compute the full coreference score defined in Equation 1 using Equations 2 and 8.",
"To improve running time, we follow Lee et al. (2018) and use their antecedent pruning method, coarse-to-fine pruning, to reduce the number of candidate antecedents for each anaphor.",
"Incorporating Entity Coreference The most straightforward way to incorporate entity coreference information into our model would be to have (1) an entity mention detection model that is architecturally identical to the trigger detection model except that it assigns entity type (rather than event subtype) labels to each span, and (2) an entity coreference model that is architecturally identical to the event coreference model described above except that it identifies antecedents for spans provided by the entity mention detection (rather than trigger detection) component.",
"While this would allow entity coreference to interact with event coreference and other tasks via the shared Span Representation Layer, the two coreference tasks would otherwise be learned independently of each other.",
"to learn entity and event coreference simultaneously by viewing them as a single coreference task.",
"From a learning perspective, there is only one task to be learned, which is coreference resolution over a set of mentions.",
"To do so, we extend the Span Representation Layer, the Trigger Prediction Layer, and the Coreference Prediction Layer as follows.",
"First, the Span Representation Layer will identify spans corresponding to mentions that are composed of both entity mentions and event mentions even though the model doesn't know (and doesn't need to know) which ones are entity mentions and which ones are event mentions.",
"Second, the Trigger Prediction Layer will assign each mention span a semantic type, which is taken from a type inventory consisting of both entity types and event subtypes (and NONE , if the span is not a mention).",
"In other words, the Trigger Prediction Layer, which is essentially extended to a Mention Prediction Layer, now extracts both entity and event mention spans.",
"Third, the Coreference Prediction Layer computes coreference chains based on the predicted mention spans and their semantic types.",
"Since all the learner sees are mentions, it doesn't know (and doesn't need to know) which coreference chains it computes are entity-based and which ones are event-based.",
"Similarly, it doesn't know (and doesn't need to know) which types in the type inventory are entity types and which ones are event subtypes.",
"A key advantage of this unified model of coreference is that it allows entity and event coreference to be tightly coupled via parameter sharing.",
"it identifies are entity-based and which ones are event-based.",
"This can be done easily based on the semantic type associated with the mentions underlying the extracted coreference relation under consideration.",
"If the semantic type is an entity type, the corresponding coreference relation is regarded as an entity coreference relation; otherwise, it is regarded as an event coreference relation.",
"Argument Prediction Layer To predict arguments and their roles, we pair each top span i and each candidate argument k to form an input vector va ik = [ g i ; t i ; g k ; t k ] , where g i is the span representation of i , t i is the one-hot subtype vector of i , g k is the span representation of argument candidate k , and t k is the one-hot subtype vector of k .",
"During training, we use the gold subtype label to derive the subtype vector.",
"During inference, we derive the subtype vector from the output of the Mention Detection Layer.",
"We feed the resulting vector into a FFNN, which outputs a vector oa ik of dimension 21.",
"oa ik ( y ) , the y th element of oa ik , is a score indicating k 's likelihood of being an argument of i with role y : oa ik = FFNN oa ( va ik ) (9) s o ( i, k, y ) = oa ik ( y ) (10) Incorporating Consistency Constraints As noted before, we propose to guide the learning process by incorporating commonsense knowledge that encodes cross-task consistency constraints on coreference and the auxiliary tasks.",
"We begin by incorporating two consistency constraints on the outputs of coreference and mention detection: C1 : If two spans are coreferent, they should have the same semantic type.",
"C2 : If a span has an antecedent that is not the dummy antecedent, its semantic type shouldn't be NONE .",
"We incorporate each constraint into the model via a scoring function that computes how much two spans i (an anaphor) and j (a candidate antecedent of i ) should be penalized if a constraint is violated.",
"For constraint C1 , we define a cost function, c 1 , which is computed as follows: c 1 ( i, j ) = min( | s t ( i, y i ) s t ( i, y j ) | , | s t ( j, y j ) s t ( j, y i ) | (11) where y i = arg max y t s t ( i, y t ) and y j = arg max y t s t ( j, y t ) .",
"Intuitively, c 1 provides an estimate of the least amount of adjustment needed to make i 's semantic type the same as j 's or the other way round.",
"penalty) if the two spans have the same type.",
"Similarly, for constraint C2 , we define a cost function c 2 , which is computed as follows: c 2 ( i,j ) = 0 argmax y Y s t ( i,y ) (cid:54) = None s t ( i, None) max y Y\\{ None } s t ( i,y ) otherwise (12) where Y is the set of possible types.",
"Intuitively, c 2 estimates the minimum amount that needs to be adjusted so that anaphor j 's type is not NONE .",
"Finally, we incorporate c 1 and c 2 into the model as penalty terms in s c (Equation 1).",
"Specifically, we redefine s c as follows: s c ( i,j ) = (cid:26) 0 j = (cid:15) s m ( i )+ s m ( j )+ s p ( i,j ) [ 1 c 1 ( i,j )+ 2 c 2 ( i,j )] j (cid:54) = (cid:15) (13) where 1 and 2 are positive constants that control the hardness of the constraints.",
"The smaller a i is, the softer the corresponding constraint is.",
"Intuitively, if a constraint is violated, s c ( i, j ) will be lowered by one or more of the penalty terms, and j will less likely be selected as the antecedent of i .",
"In addition, we enforce the following consistency constraints.",
"Like C1 and C2 , each of them will be accompanied by a cost function that will eventually be incorporated into s c as a penalty term.",
"Coreference and anaphoricity.",
"C3 : If a span's antecedent is not the dummy antecedent, its anaphoricity value should be ANAPHORIC .",
"C4 : If a span has a dummy antecedent, its anaphoricity value should be NON-ANAPHORIC .",
"Coreference and realis detection.",
"C5 : If two spans are coreferent, they should have the same realis value.",
"C6 : If a span's antecedent is not the dummy antecedent, its realis value should not be NONE .",
"Coreference and argument extraction.",
"C7 : If two event mention spans are coreferent, their same-role arguments, if any, should be entity-coreferent.",
"The loss function we use, L () , is composed of the losses of the six tasks, and is defined as follows:",
"where the hyperparameters (i.e., the 's) determine the trade-off between the task losses.",
"The model is trained to minimize L () , whereas the hyperparameters are tuned using grid search to maximize AVG-F (the standard event coreference evaluation metric; see the next section) on development data.",
"Task Losses We employ a max-margin loss for each of the six tasks.",
"Defining the coreference loss is slightly tricky since the coreference annotations for each document are provided in the form of clusters.",
"We adopt the coreference loss function previously defined by Wiseman et al. (2015) for entity coreference resolution.",
"Specifically, let GOLD c ( i ) denote the set of spans preceding span i that are coreferent with i , and y lc be arg max y GOLD c ( i ) s c ( i, y ) .",
"In other words, y lc is the highest scoring (latent) antecedent of i according to s c among all the antecedents of i .",
"The loss function for coreference is defined as: L c () = n (cid:88) i =1 max j Y ( i ) ( c ( i, j )(1+ s c ( i, j ) s c ( i, y lc )) (15) where c ( i, j ) is a mistake-specific cost function that returns the cost associated with a particular type of error (Durrett and Klein, 2013).",
"1 Intuitively, the loss function penalizes a span i if the predicted antecedent j has a higher score than the correct latent antecedent y lc .",
"where t ( i, l ) is a mistake-specific cost function that returns the cost associated with a particular type of error.",
"1 Intuitively, the loss function penalizes each span for which each of the wrong subtypes l has a higher score than the correct subtype y t according to s t .",
"The task losses for anaphoricity determination, realis detection, and argument extraction are all max-margin losses that are defined similarly as the one used for trigger detection.",
"We perform training and evaluation on the English corpora used in the TAC KBP 2017 Event Nugget Detection and Coreference task.",
"There are no official training sets: the task organizers simply made available a number of event coreference-annotated corpora for training.",
"We use LDC2015E29, E68, E73, E94, and LDC2016E64 as our training set, which contain 817 documents 1 Space limitations preclude a description of these error types.",
"with 22894 event mentions distributed over 13146 coreference chains 2 .",
"Among these 817 documents, we reserve 82 documents for parameter tuning and use the remaining documents for model training.",
"We report results on the official test set, which consists of 167 documents with 4375 event mentions distributed over 2963 coreference chains.",
"Results of event coreference, trigger detection and realis detection are obtained using version 1.8 of the official scorer provided by the KBP 2017 organizers.",
"For event coreference, the scorer employs four scoring metrics, MUC (Vilain et al., 1995), B 3 (Bagga and Baldwin, 1998), CEAF e (Luo, 2005) and BLANC (Recasens and Hovy, 2011), as well as the unweighted average of their F-scores (AVG-F).",
"Results of trigger detection and realis detection are both expressed in terms of Precision (P), Recall (R) and F-score.",
"The scorer considers (1) a trigger correctly detected if it has an exact match with a gold trigger in terms of boundary and event subtype, and (2) a realis label correctly classified if it has an exact match with a gold trigger in terms of boundary and realis value.",
"Additionally, we express results of both argument extraction and anaphoricity determination in terms of Precision, Recall and F-score.",
"We consider an event argument correctly extracted if it has an exact match with a gold trigger-argument pair in terms of trigger boundary, event subtype, argument head and argument role.",
"We consider an anaphoric mention correct if it has an exact match with the boundary of a gold anaphoric mention.",
"Finally, we report entity coreference results in terms of CoNLL score, which is the unweighted average of MUC, B 3 , and CEAF e .",
"We use the SpanBERT-large model in the Span Representation Layer.",
"3 For each document, we split it into segments of length 512 and generate all spans of length up to 10.",
"Each FFNN has one hidden layer of size 2000.",
"The size of the width feature embedding is 20.",
"For span pruning, we keep the top 50% of the spans.",
"For candidate antecedent pruning, we keep the top 15 antecedents.",
"Event Coreference Trigger Anaphoricity Realis Argument EntityCoref.",
"MUC B 3 CEA BLA AVG P R F P R F P R F P R F CoNLL Jiang et al. (2017) 30.6 43.8 39.9 27.0 35.3 56.8 55.6 56.2 48.0 46.9 47.4 Huang et al. (2019) 35.7 43.2 40.0 32.4 36.8 56.8 46.4 51.1 Lu and Ng (2020) 37.1 44.5 40.0 29.9 37.9 64.5 46.9 54.3 Knowledge-lean 37.6 52.3 51.7 33.6 43.8 71.5 55.3 62.4 Pipeline 38.6 53.0 53.0 35.0 44.9 73.9 56.1 63.8 43.0 44.5 43.8 70.0 53.1 60.3 36.9 29.9 33.0 72.6 Full Joint 45.2 54.7 53.8 38.2 48.0 71.6 58.7 64.5 50.4 45.3 47.7 63.7 52.0 57.3 32.4 24.5 27.9 68.7 Table 2: Results of different resolvers on event coreference and related tasks.",
"For training, we use document sized mini-batches and apply a dropout rate of 0.3.",
"Following Joshi et al. (2019), we use different learning rates for training the task parameters and the SpanBERT parameters.",
"Specifically, the task learning rate is 1 10 5 and is decayed linearly, whereas the learning rate for SpanBERT is 2 10 4 and is decayed linearly.",
"The hyperparameters in the loss function, c , t , a , r , and o , are 1, 1, 0.05, 0.5, and 0.05.",
"Results are shown in Table 2.",
"To gauge the performance of our model, we employ five baselines.",
"Row 1 shows the results of our first baseline, Jiang et",
"al.'s (2017) resolver, which is the highest-scoring system participating in KBP 2017.",
"Rows 2 and 3 show the performance of our next two baselines, a neural resolver (Huang et al., 2019) and a non-neural resolver (Lu and Ng, 2020) that have achieved the best results to date on the KBP 2017 test set.",
"Hence, these three baselines can be viewed as the prior state of the art.",
"As we can see, while Jiang et al. have the best trigger detector (56.2 F-score), the best event coreference performance is achieved by Lu and Ng's resolver (37.9 AVG-F).",
"Row 4 shows our fourth baseline, which is our model except that (1) three prediction layers (argu-ment, realis, and anaphoricity) are removed, and (2) the remaining layers are trained to identify event mentions only (i.e., without entity mentions).",
"This baseline mimics typical knowledge-lean approaches to event coreference resolution, which perform only trigger detection and event coreference, but is the first knowledge-lean event coreference approach implemented in a span-based framework.",
"As we can see, this baseline outperforms Lu and Ng's resolver by 5.9% points in AVG-F for event coreference.",
"A closer inspection of the coreference evaluation metrics reveals that in comparison to Lu and Ng, this baseline's B 3 , CEAF e and BLANC scores increase substantially while its MUC score barely changes.",
"Since MUC only rewards successful identification of coreference links, the fact that the MUC score is more or less unchanged implies that the improvement does not arise from link identification; rather, the fact that the B 3 , CEAF e and BLANC scores improve suggests that the improvement arises from successful identification of sin-gleton clusters.",
"This is further supported by the improvement in trigger detection: the baseline's trigger detection module achieves an F-score of 62.4, outperforming Lu and Ng's trigger detection module by 8.1% points in F-score.",
"This huge improvement should not be surprising, as SpanBERT is specifically designed to extract text spans.",
"Overall, despite the encouraging 6%-point improvement in event coreference AVG-F score, we cannot say that the successes of span-based models on entity coreference can be extended to event coreference as it largely fails to establish event coreference links.",
"Row 5 shows the result of our fifth baseline, which is a pipelined version of our model designed to gauge the benefits of our joint model.",
"Here, we first train a trigger detector, which is the same as the Mention Prediction Layer of our model trained to assign event subtypes to top spans.",
"The resulting triggers are used to train an anaphoricity model (same as our model's Anaphoricity Prediction Layer) and a realis detection model (same as our model's Realis Prediction Layer).",
"Next, we train an entity coreference model, which is the same as our third baseline except that it is trained to operate on entity rather than event mention spans.",
"Then, we train an argument extraction model (same as our model's Argument Prediction Layer) using the extracted entity mentions as candidate arguments for the triggers identified by the trigger detection model.",
"Finally, the outputs of these models are used to enforce the seven constraints in our model as hard constraints: any candidate antecedent of an anaphor that violates any of the constraints is filtered prior to event coreference resolution.",
"Overall, this baseline outperforms the fourth baseline by 0.6% points in AVG-F for event coreference and 1.4% points in F-score for trigger detection.",
"Row 6 shows the result of our full model, which outperforms the Pipeline model by 3.1% points in AVG-F for event coreference and establishes new state-of-the-art results.",
"Encouragingly, the gains in AVG-F are accompanied by improvements w.r.t. all four coreference scoring metrics.",
"In particular, the MUC score improves considerably by 6.6% points, which means that the full model has successfully identified event coreference links.",
"In addition, we see a 0.7% point improvement in trigger detection over Pipeline, and a 12.9% point improvement in realis detection in comparison to Jiang et al.",
"For bookkeeping purposes, we also report the scores for each component of our model.",
"Overall, the fact that our joint model outperforms Pipeline suggests the benefits of joint modeling.",
"To evaluate the contribution of the different components in our model, we show in Table 3 ablation results, which we obtain by removing one component at a time from the model and retraining it.",
"Consistency constraints.",
"Ablating the consistency constraints means removing all the penalty terms from s c .",
"The ablated system resembles what one would usually see in a multi-task learning setup, where the different tasks involved has a shared representation.",
"As we can see from row 2, event coreference performance drops by 1% point, suggesting the usefulness of using consistency constraints in a multi-task setup.",
"While it is perhaps not surprising that the consistency constraints have the largest impact on event coreference performance, it is somewhat interesting to see that there is one task whose performance improves when consistency constraints are ablated, realis detection.",
"Entity coreference.",
"Next, we ablate the entity coreference component.",
"The ablation of entity coreference necessitates the removal of the argument extraction component and the associated constraints since the latter relies on the outputs of entity coreference.",
"We see from row 3 that event coreference performance drops precipitously by 2.7% points.",
"This suggests that entity coreference has a considerable positive impact on event coreference.",
"in a typical multi-task setup?",
"As we can see from row 4, the performances of event coreference and entity coreference drop by 0.8% points and 3% points respectively.",
"These results suggest that our viewing the two tasks as a single task is beneficial.",
"Anaphoricity determination.",
"Next, we ablate the anaphoricity component, which involves removing both its task loss and the associated constraints.",
"From row 5, we see that event coreference performance drops by 0.5% points, and anaphoricity determination performance drops 0.8% points.",
"Realis detection.",
"When we ablate realis detection, both the task loss and the associated consistency are removed.",
"The performances of event coreference and anaphoricity drop precipitously, by 1.4% points and 1.0% point respectively, suggesting the usefulness of realis detection for both event coreference and anaphoricity detection.",
"Argument extraction.",
"Finally, when the argument extraction component is ablated, event coreference performance drops by 0.6% points.",
"These results illustrate the importance of argument extraction for event coreference.",
"Overall, these results suggest that each component contributes positively to event coreference.",
"We proposed the first neural model for event coreference resolution that (1) jointly learned six tasks, (2) used consistency constraints to guide learning, and (3) viewed entity and event coreference as a single task.",
"Our model outperformed several strong baselines and achieved state-of-the-art results on the KBP 2017 event coreference dataset.",
"We thank the three anonymous reviewers for their detailed and insightful comments on an earlier draft of the paper.",
"This work was supported in part by NSF Grants IIS-1528037 and CCF-1848608."
] | [
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"method",
"method",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"other"
] |
[
"Despite the success of contextualized language models on various NLP tasks, it is still unclear what these models really learn.",
"In this paper, we contribute to the current efforts of explaining such models by exploring the continuum between function and content words with respect to contextualization in BERT, based on linguistically-informed insights.",
"In particular, we utilize scoring and visual analytics techniques: we use an existing similarity-based score to measure contextualization and integrate it into a novel visual analytics technique, presenting the model's layers simultaneously and highlighting intra-layer properties and inter-layer differences.",
"We show that contextualization is neither driven by polysemy nor by pure context variation.",
"We also provide insights on why BERT fails to model words in the middle of the functionality continuum.",
"The rise of contextualized language models (LM), i.e., contextualized word and sentence representations, such as ELMO (Peters et al., 2018) and BERT (Devlin et al., 2019), has brought many wellknown NLP tasks to a tremendous breakthrough.",
"Contextualized embeddings have replaced earlier static embeddings (Mikolov et al., 2013; Pennington et al., 2014; Conneau et al., 2017), creating new standards for the state-of-the-art.",
"LMs have learned highly transferable and task-agnostic properties of language (e.g., Belinkov, 2018; Conneau et al., 2018; Peters et al., 2018), even to a degree of imitating the classical NLP pipeline (Tenney et al., 2019a).",
"Despite these research efforts, it remains yet unclear as to what extent LMs like BERT capture complex linguistic phenomena and whether different linguistic properties are learned Contribution to the visualization part.",
"across the different layers of the model's architecture: the existing evidence is conflicting and in some cases even contradictory (Rogers et al., 2020).",
"One recent line of work (Ethayarajh, 2019) explores the actual contextualization captured in these models, i.e., the degree to which a word is modeled as context-specific.",
"This sheds light on the context-specificity of individual words and the degree of contextualization of different word groups.",
"This paper contributes to this line of work by examining the degree of contextualization of function vs. content words.",
"We treat functionality as a continuum, comparing and contrasting BERT's (De-vlin et al., 2019) modeling of categories of words within this continuum with the expected modeling according to the theoretical linguistic literature.",
"It has been repeatedly shown that LMs fail to generalize and capture the compositionality of language because they struggle with words of high functionality, e.g., quantifiers, prepositions, modals, conjunctions (Dasgupta et al., 2018; Naik et al., 2018; McCoy et al., 2019, to name only a few).",
"Thus, our linguistically-informed analysis sheds light on the peculiarities of these phenomena and contributes to our better understanding of BERT.",
"This paper utilizes the self-similarity contextualization score of Ethayarajh (2019) for better comparability.",
"The exploration of the scores and phenomena is enabled by LMExplorer , a visual analytics (VA) technique for the layer-wise explanation of contextualized word embeddings.",
"LMExplorer contributes a new perspective on the learned patterns of the model, and shows clusters and score developments in the model's layers simultaneously.",
"Overall, the contribution of this paper is twofold: (1) we generate insights as to how BERT captures function vs. content words (Sections 4 and 5), and (2) present a novel visual analytics technique that facilitates such insights by explaining LMs through contextualization scoring (Section 3).",
"Research on the interpretability of LMs has been pursued in two main directions, mainly focusing on BERT.",
"For one, probing tasks are used to investigate the linguistic properties learned by the LM by training a linear model on the basis of the corresponding contextualized embeddings for the prediction of specific linguistic properties.",
"For another, the interpretability of LMs has been explored via adversarial datasets to assess the performance of an LM with respect to challenging linguistic phenomena.",
"To further explore the interpretablity of LMs, we see work coming from the field of VA as promising.",
"VA techniques have been used extensively for exploring and interpreting different deep learning models (Hohman et al., 2019), incl.",
"LMs.",
"Probing Probing experiments have shown that BERT's transformer architecture encodes semantic information such as word senses and semantic roles (Reif et al., 2019; Tenney et al., 2019b; Ettinger, 2020; Zhao et al., 2020), syntactic information in the form of constituents and hierarchical structure (Goldberg, 2019; Hewitt and Manning, 2019; Warstadt and Bowman, 2020; Chi et al., 2020), morphosyntactic and morphological features (Edmiston, 2020; Tenney et al., 2019b), and discourse-related information necessary for tasks such as coreference resolution (Tenney et al., 2019b).",
"Moreover, the traditional NLP pipeline sequence of POS tagging, syntactic parsing, named entity recognition, semantic role labeling and coreference resolution can be mapped onto BERT's transformer layers from lower to higher (Tenney et al., 2019a).",
"Accordingly, several probing studies have shown that BERT captures a hierarchy of linguistic information (e.g., Jawahar et al., 2019; Lin et al., 2019; Edmiston, 2020): surface features are represented best in the lower layers, while syntactic features are captured best in the middle layers.",
"The middle to higher layers represent morphological features best, and semantic information is captured best in the higher layers.",
"Adversarial Testing Adversarial testing has shown that LMs struggle in making generalizations on basic lexical relations (Glockner et al., 2018), identifying ungrammaticality (Marvin and Linzen, 2018), efficiently capturing challenging linguistic phenomena, such as negation (Dasgupta et al., 2018; Richardson et al., 2020), modals, quantifiers and monotonicity (Richardson et al., 2020), passives (Zhu et al., 2018), conditionals (Richardson et al., 2020), conjunctions (McCoy et al., 2019), implicatives and factives (McCoy et al., 2019), and modeling human reasoning patterns, such as numerical or common-sense reasoning (Naik et al., 2018).",
"Overall, the evidence from adversarial testing contradicts the results of the probing studies: if the LM indeed is able to acquire deep' linguistic knowledge (e.g., about syntactic hierarchies), it should be able to deal with the phenomena present in the adversarial test sets.",
"Contextualization Despite the conflicting evidence about the linguistic capacities of LMs like BERT, it is widely acknowledged that the word embeddings generated by such models are contextualized, i.e., there is no finite number of word sense representations and a word has different vector representations across different contexts.",
"Particularly, by assessing a word's contextualization on the basis of self-similarity scores, Ethayarajh (2019) shows that the embeddings become more contextualized, i.e., more context-specific, in the upper layers of BERT.",
"Moreover, it has been shown that contextualized embeddings generally cluster with one another with respect to word senses (Reif et al., 2019; Wiedemann et al., 2019).",
"Visual LM Explanations Approaches for visual LM explanations can be grouped into two main categories.",
"One strand of research focuses on transformer-based LMs and explains how they learn through visualizing attentions (e.g., NL-IZE (Liu et al., 2018), Seq2Seq-Vis (Strobelt et al., 2018), BertViz (Vig, 2019), exBERT (Hoover et al., 2020), SANVis (Park et al., 2019), and Attention Flows (DeRose et al., 2021)).",
"Another strand of research explains what the model learns by visualizing word embeddings.",
"Although most existing work on embedding explanation is based on probing tasks, visualization of embedding characteristics has emerged as an active research topic.",
"The first tools were related to the exploration of static embeddings, e.g., by Liu et al. (2017), who visualize word2vec and Glove embeddings, focusing on analogy exploration.",
"Heimerl and Gleicher (2018) explain the same models and present visualizations that support analysis of multiple tasks, among others, the analysis of local word neighborhoods.",
"Also, Boggust et al. (2019) explain static embeddings of word2vec, Glove, and fastText.",
"Their explanations focus on local neighborhoods visualized using small multiples by applying a dimensionality reduction.",
"Berger (2020) has recently presented a Figure 1: The main visualization of our technique uses layer-wise interlinked-projections that show embeddings from the layers of an LM in a 2D space; here, BERT's 12 layers.",
"visual approach for exploring correlations between embedding clusters in BERT for a single model's layer at a time.",
"The novelty of our approach is the explanation of contextualized word embeddings through contextualization scores that are visualized for all of the model's layers simultaneously.",
"To support the analysis of word contextualization within the functionality continuum, we have developed a VA technique called LMExplorer .",
"This technique discloses layer-wise spatial and score-based patterns in the learned embedding representations.",
"Using interlinked embedding projections, we show the spatial relations of the high-dimensional embedding space.",
"To provide further insights into the word contextualization, the technique utilizes scoring functions (i.e., word self-similarity ) as a contextualization explanation.",
"The scores are used to explore and navigate the embedding space, which is facilitated by supporting views and interactions.",
"The technique is integrated into the lingvis.io framework (El-Assady et al., 2019a).",
"Task Analysis The technique is designed to support model analysts in gaining insights into the word contextualization.",
"The proposed design is informed by a set of tasks that were obtained through investigating the analysts in their typical analysis workflow.",
"These are: ( T1 ) Analyze spatial structure of the embedding space; ( T2 ) Gain a global overview of the corpus; ( T3 ) Conduct interactive pattern analysis; ( T4 ) Create user-defined word groupings for detailed inspection; and ( T5 ) Conduct a focused analysis of contextualization.",
"The main visual components of our technique are layer-wise interlinked projections (Figure 1) a novel visualization displaying layers of the LM simultaneously for effective spatial pattern analysis.",
"Motivation The design of this visualization was informed by T1 and T2 , i.e., corpus level exploration of embedding spatial patterns in different layers of the LM.",
"Projection-based visualizations are the most common methods to visualize word embeddings (e.g., Smilkov et al., 2016; Liu et al., 2017; van Aken et al., 2019; Aken et al., 2020) and although some approaches have enabled the exploration of embeddings in different layers (e.g., Smilkov et al., 2016; van Aken et al., 2019; Aken et al., 2020), they typically visualize only one layer of the LM at a time.",
"However, changes in embedding positions and their neighborhoods across layers can be an indicator of the model capturing new context information.",
"To support such analyses, our technique displays the embeddings for all layers of the LM simultaneously and visually highlights changes in their neighborhoods.",
"Design Rationale To implement the exploration of such spatial patterns, we use a dimensionality reduction technique on the computed embedding vectors from each layer of the LM.",
"In particular, we reduce the 768-dimensional embedding vectors to two dimensions, used as x and y coordinates to visualize words in one layer.",
"Using this technique, words with similar embeddings are represented by similar coordinates in the 2D space.",
"In total, 12 projections are created, each representing one layer of the BERT-base model.",
"The projections are ordered vertically underneath each other, starting from layer one at the very top and ending with the last layer at the bottom.",
"The words in the projection are visualized as shapes.",
"By default, they are displayed as circles and colored according to the word's position in the 2D space,",
"cf., El-Assady et al. (2019b).",
"After displaying the projections, we add connecting lines between layers to support the analysis of word position changes in the visualized space.",
"To reduce the number of crossing edges, we additionally apply an edge-bundling technique that combines neighboring edges in a more coherent representation.",
"An example of the visualization is shown in Figure 1.",
"In our approach, both contextualized word embeddings and aggregated word embeddings (i.e., average or median embedding of all contexts of a word) can be visualized.",
"The words in each projection (i.e., layer) are represented by different embedding vectors.",
"Hence, although we visualize the same words, the consecutive projections differ and may even get rotated or flipped due to artefacts that are common for most of the dimensionality reduction techniques (e.g., UMAP (McInnes et al., 2018), t-SNE (Van der Maaten and Hinton, 2008)).",
"Even if words maintained their neighborhoods, the rotation of the projections would prevent the users from easily comprehending on embedding positional changes.",
"Thus, to prevent such artifacts, we apply an extension of UMAP called AlignedUMAP .",
"It reduces the rotation artifacts by using the already projected data as an anchoring .",
"Hence, we project the embeddings from layer 2 by specifying relations to the projection of embeddings from layer 1, and iterate this alignment process up to the last layer.",
"This spatialization concept enables an effective layer comparison as well as the detection of word groups with similar spatial patterns ( T1 , T2 ).",
"The interlinked projections benefit the analysis of word functionality across layers, especially in the exploratory phase of the analysis.",
"The user can brush neighboring words in the projection to gain an overview of word groups that are relevant to observe in detail.",
"To support hypothesis generation and testing, we provide multiple interaction techniques that help explore the analyzed corpus.",
"When hovering over a word in the projection, the word and its path through the different layers gets highlighted ( T3 ) and its contexts are displayed for close-reading.",
"To ease the analysis of words with common spatial patterns, the user can brush a group of neighboring words in the projection and drag them aside.",
"This reduces the displayed information and supports a more detailed pattern analysis ( T4 ).",
"We employ common approaches in explaining contextualization and compute multiple word-level contextualization scores.",
"These are integrated into the interlinked-projection view as an overlay ( T5 ).",
"Scoring Functions To explain the contextualization of a word's representation, Ethayarajh (2019) introduces three metrics: self-similarity , maximum explainable variance , and intra-sentence similarity .",
"In this paper, we focus on the word self-similarity , which Ethayarajh describes as the average cosine similarity of a word with itself across all the contexts in which it appears, where representations of the word are drawn from the same layer of a given model.",
"Although the analysis in this paper is solely based on the self-similarity score, the technique can be effortlessly extended to further explanation scores.",
"For instance, we have explored the word's contextualization also by defining a baseline embedding and obtaining its similarity to the contextualized one.",
"It is possible to create multiple baselines by either reducing the context size (e.g., extracting embedding from a word without a surrounding context) or selecting a specific layer of the LM for reference.",
"Ethayarajh (2019) describes the 0 th layer as an appropriate baseline.",
"However, for specific hypothesis testing, one could even select one of the upper layers as a reference layer.",
"Score Overlay The scores are mapped to the words in the interlinked-projection view to provide further insights into the embedding contextualization.",
"In particular, we use three visual design elements:",
"(a) color,",
"(b) shape, and",
"(c) size.",
"First, we use a diverging color scale that maps the scores from brown (min value) to green (max value) colors.",
"Second, we highlight words having extreme values (i.e., one standard deviation above the min value and below the max value of the score's distribution in the particular layer ) by displaying them as rectangles instead of the default circles.",
"Third, we map the score's range across all layers of the model to the shape's size, supporting layer comparison (shown in Figure 4).",
"To support the exploration of words with common characteristics (e.g., spatial patterns), we provide supporting visualizations and interactions.",
"(a) The range of contextualization scores for all words in different layers are displayed in distribution plots , supporting layer comparison.",
"(b) Words can be filtered and highlighted in the projection by specifying a score's range.",
"The distribution plots provide an overview of the embedding contextualization scores (i.e., self-similarity ) and are placed next to the corresponding layer projection.",
"They enable the analysis of score changes through the model's layers.",
"As shown in Figure 2a, the self-similarity score decreases in upper layers, and the standard deviation increases accordingly.",
"The distribution plots can be further used for filtering words by specifying a range in the contextualization score (shown in Figure 2b).",
"Words that fit within the range are highlighted in the interlinked-projection view .",
"For tailored score-pattern analysis, we display the score changes in an additional, more compact matrix plot visualization (shown in Figure 3).",
"The columns of the matrix represent words in the corpus, and rows show the layer-wise contextualization scores.",
"The user can define a query by selecting a word in the matrix plot and the words with similar patterns (i.e., the response of the query) are highlighted in the interlinked-projection view .",
"To obtain similar patterns, we first represent each word by a vector of 12 score values corresponding to each layer for BERT-base.",
"We then compute the cosine similarity on these vectors to retrieve words with similar score patterns.",
"While Ethayarajh (2019) initially found that the increase in contextualization across the different BERT layers (i.e., the decreasing self-similarity ) seems to be driven by polysemy, stopwords' such as and , of , the and to seem to contradict this conclusion.",
"Stopwords, which in essence are function words, also become increasingly contextualized in the upper layers.",
"Thus, contextualization seems not to be entirely driven by polysemy, but rather the variety of contexts a word appears in (Ethayarajh, 2019).",
"However, function words are not a homogeneous class, and some function words indeed have semantic content in addition to having a grammatical function.",
"Thus, we decided to investigate function and content words in more detail, using the LMExplorer to explore contextualization in BERT with respect to the functionality continuum.",
"In theoretical linguistics, there is a traditional distinction between function and content words.",
"Several criteria have been proposed to distinguish between the two groups, e.g., semantic content, membership openness, flexibility of syntactic attachment, separability from complements (Corver and van Riemsdijk, 2001).",
"While content words comprise a specific semantic content and contribute to the principal meaning of a sentence, function words are rather non-conceptual' and mainly ful-fill some grammatical function (e.g., expressing modality or definiteness), gluing content words together.",
"Furthermore, content words are open-class because new members can freely be added.",
"In contrast, function words are closed-class, i.e., they are members of a fixed set.",
"Additionally, content words are flexible with respect to the syntactic phrase they attach to, e.g., the verb think can be complemented by an NP or a clause, while function words typically only combine with a specific syntactic phrase, e.g., a determiner with an NP.",
"Also, Figure 4: Exploring BERT's layer 10 allows us to draw insights about function and content words (Section 5).",
"in contrast to content words, function words are generally inseparable from their content word complements, i.e., they cannot be detached from their lexical heads, e.g., in in the house , the functional in cannot be separated from the content word house .",
"Despite these hard' criteria, the two categories are not rigid.",
"Function and content words form a quasi-continuum (squishiness'), a gradience between the two categories (Ross, 1972; Emonds, 1985).",
"This continuum is based on the fact that some words share properties of both categories.",
"Such words can be placed on a sliding scale of functionality.",
"For example, prepositions are less functional than articles, e.g., some prepositions are associated with a locative or directional meaning, but they are also more functional than nouns or verbs, e.g., because they are inseparable from their content words.",
"Within computational linguistics and especially NLP, this functionality continuum has not received much attention.",
"Prototypically functional words are mostly treated as stopwords and often removed from the analysis.",
"Nevertheless, a more linguistically-motivated look in this continuum can contribute to the explainability of LMs like BERT.",
"Utilizing the LMExplorer , we visualize a random subset of 800 unique sentences of the RTE-1 (Da-gan et al., 2005), RTE-2 (Bar-Haim et al., 2006) and RTE-3 (Giampiccolo et al., 2007) corpora.",
"These corpora contain sentence pairs originally intended for Natural Language Inference.",
"They stem from the news domain and thus contain variable content.",
"The pairs are split into single sentences and mapped to their POS tags based on the Stanford POS tagger (Toutanova et al., 2003).",
"We visualize the BERT-base embeddings and self-similarity of 496 unique words with a frequency greater than 5 and lower than 50, following Ethayarajh (2019).",
"The distribution plots show at-a-glance that each of the distributions roughly follows that of a normal distribution and that the mean self-similarity decreases across layers while the standard deviation increases, see Figure 2a.",
"This observation is in line with the finding by Ethayarajh (2019) that contextualized word representations are more context-specific in higher layers, i.e., the self-similarity decreases overall.",
"Moreover, we find specific spatial patterns in the interlinked-projection view , see Figure 1, i.e., specific groups of content words, e.g., named entities, and specific groups of function words, e.g., prepositions, seem to cluster together across the layers.",
"By filtering for different score ranges based on self-similarity via the distribution plots, we first investigate the three groups min, max and mid (one standard deviation around the mean standard deviation; grey area) in more detail.",
"In addition, we explore the self-similarity patterns in these areas in the matrix plots .",
"Score Areas Across the layers, mostly named entities, e.g., place names ( Israel, Korea, Haiti ), monosemous words ( rabies ), and polysemous words 1 , whose senses are closely related (e.g., research, currency, Marijuana ), occupy the max area across all layers, see, e.g., layer 10 in Figure",
"4. In the min area, highly polysemous words, e.g. field, 1 The distinction between polysemy and homonymy is controversial.",
"We take polysemous words to have multiple senses which exhibit some kind of semantic relation, e.g., home as a building/location vs. as a social institution.",
"Homonymous words comprise unrelated senses, e.g., bank as financial institution vs. as natural object (Utt and Pad o, 2011) often of different syntactic categories, e.g., present as a gift (noun) and as the verb to present .",
"We base our decisions on homonymy/polysemy on WordNet 3.1 (Fellbaum, 1998).",
"home , and homonymous words, e.g., set , occupy the space in the upper layers (e.g., layer 10, see Figure 4), and can also be found across the preceding layers.",
"Prepositions (e.g., of, for ) occur in the min range from the middle layers onwards.",
"Moreover, the determiner the occurs in the min range at layer 11 and generally shows a low self-similarity (see Figure 3).",
"In the mid range, we find temporal adverbials, e.g., today and now , modal verbs ( must, should ) as well as polysemous and monosemous words; see Figure",
"4. To shed light on these contextualization patterns, we explore the functionality continuum in more detail by looking at different groups of words across the layers.",
"Word-based Selection We discern the following groups of words for our further explorations: 1) articles, 2) prepositions, 3) quantifiers, 4) modal verbs, 5) temporal adverbials, 6) monosemous words, 7) polysemous words and 8) homonymous words.",
"Each group demonstrates a different pattern of self-similarity across layers, as shown in Figure",
"5. First, we observe that, before (almost) ending up in the min range, the determiners the and a start off in the mid range of the distribution with a decreasing self-similarity across the layers.",
"Prepositions such as of, in, on, for, at are found in the mid-min area until layer 6 but from then on, they are grouped under min.",
"Quantifiers like some, all, every remain in the mid range across all layers.",
"Modal verbs such as must, should, may follow an inconsistent pattern: while must and should start off in the upper ends of the mid area (max-mid) and end up in the mid range from layer 9 on, may is at first in the min area and after layer 5 in the mid range.",
"Temporal adverbials such as yesterday, never, now are also inconsistent.",
"Some of them (e.g., yesterday ) belong to the max group in the lower layers, but slowly move towards the mid area as the layers increase without ever entering the exact mid area.",
"Others (e.g., now, never ) are constantly within the mid range, starting at the higher end of mid and moving towards the middle.",
"Monosemous words like attorney, river, tsunami are mostly found in the max range, with a decreasing tendency across layers, but remain in the upper ends of the max area.",
"Polysemous words whose senses are very closely related, e.g., universe, statement , are also mostly found in the max area, while highly polysemous words whose senses are loosely related, e.g., field , are located in the min area in the lower layers and although their self-similarity increases, they remain in the min-mid area across layers.",
"Finally, homonymous words, e.g., set , are in the min area across layers.",
"These observations lead to new insights into how BERT captures contextualization, see Section",
"5. 5 Insights: The Functionality Continuum During our exploration, we came across patterns that fit to the theory of the functionality continuum and others that were contrary to our expectations.",
"Above all, we observed that contextualization is neither triggered merely by polysemy nor by variation in context.",
"To explain the observed patterns,",
"a) we positioned the defined categories within the functionality continuum 2 based on the inherent linguistic properties of the words and on insights from lexical semantics, and",
"b) we identified three criteria as potential triggers of contextualization, as shown in Table 1.",
"The first criterion refers to the sense variation ( Sense Var. ), i.e., whether a word has multiple senses (high variation), or only one or multiple but very closely related senses (low variation).",
"The second criterion captures syntactic context variation ( SynCtx. Var. ), i.e., whether a word needs to be part of a specific syntactic structure (low) or is flexible in terms of attachment and can be found in different kinds of syntactic structures (high).",
"Another potential trigger we identified is that of variation of semantic context ( SemCtx. Var. ).",
"This captures whether the contexts in which a word can occur are semantically similar (low) or different (high) to one another.",
"Based on these triggers and previous findings on contextualization by Ethayarajh (2019), we derive the expected contextualization ( Exp. Contextual. ) of each of the predefined categories.",
"We can then compare this to BERT's actual behavior ( BERT ) and shed light on BERT's abilities to capture the functionality continuum.",
"Note that here the expected contextualization coincides with the SemCtx.Var.",
"for the categories investi-2 See also semantic proximity continuum by Blank (1997).",
"gated, but might deviate for others.",
"Additionally, differences between the expected contextualization and the SemCtx.Var.",
"might currently be absorbed by our binary encoding (low/high).",
"We envision a more fine-grained Exp.",
"Contextual.",
"measure, accounting in detail for the relative positioning of words in the middle of the continuum.",
"Homonymy Homonymous words, being on the more content-like' end of the continuum, have a high sense variation due to their multiple (unre-lated) senses, a high syntactic variation (flexible attachment as content words) and a high semantic context variation as, due to their multiple senses, they can occur in semantically very different contexts.",
"This means that we expect a high contextualization, i.e., the embeddings of homonymous words are highly context-specific.",
"This is indeed confirmed with our findings since these words generally occur in the min area.",
"Polysemy Polysemous words, mostly with content-like' properties, exhibit a low/high sense variation, depending on whether they are highly polysemous, i.e., have loosely related senses, or not, i.e., have semantically related senses.",
"As it is typical of content words, polysemous words show high syntactic variation.",
"Concerning their semantic context variation, they are again in a grey' area depending on the degree of polysemy: highly polysemous words mostly appear in semantically different contexts, while plain polysemy is mostly found in semantically similar contexts since the senses are closely related.",
"With this, the expected contextualization is respective to the degree of the polysemy.",
"Indeed, BERT meets these expectations: highly polysemous words like field, home are in the min area across layers (high contextualization), while plain polysemous words are rather found in the max area (low contextualization).",
"Monosemy Monosemous words also seem to be correctly captured by BERT.",
"Such words have low sense variation, high syntactic variation (as content words) and low semantic context variation (due to their low sense variation).",
"According to this, they are also expected to have low contextualization.",
"We find this low contextualization in BERT as well, where monosemous words have max self-similarity across layers.",
"Temporal Adverbials At the middle of the functionality continuum, temporal adverbials have a low sense variation, e.g., yesterday has only one meaning, 3 as well as low syntactic variation.",
"On the other hand, their semantic context variation is high because they can occur in semantically very different contexts.",
"Thus, the expected contextualization is high, i.e., their embeddings should be context-specific to match the semantically different contexts they can appear in.",
"BERT fails to learn this: temporal adverbials are either found within the mid area across all layers or end up in this range in the upper layers, contrary to the expected min.",
"Modals & Quantifiers BERT also struggles in capturing the functionality continuum with modals and quantifiers.",
"These are comparable to words with high sense' variation: modals can not only have a deontic or an epistemic flavor, but also express variation through their variable quantificational force; similarly, quantifiers exhibit variation via their variable scope interpretation (wide or nar-row).",
"Both modals and quantifiers have low syntactic variation; they can only attach with specific syntactic phrases.",
"The contexts they appear in can be semantically very different and thus they have a high semantic context variation.",
"Based on this half-functional-half-content nature, modals and quantifiers are expected to have high contextualization, i.e., have context-specific embeddings based on the modal flavor they express, the quantificational force they capture, the scope resolution, etc.",
"However, we can see that BERT fails to meet this expectation.",
"3 It should be noted that such adverbials have one meaning, even if their extension is always a different one due to different reference points.",
"Modals and quantifiers mostly occur in the mid range instead of the expected min.",
"Prepositions At the functional end of the continuum, we find prepositions and articles.",
"Prepositions are comparable to words with a high sense' variation, capturing the fact that the same preposition can, for example, be locative or temporal, depending on the context.",
"Prepositions have low syntactic variation, as most functional words.",
"Still, their semantic context variation matches their multiple senses.' Therefore, we expect the preposition embeddings to be highly context-specific: this is indeed the case in BERT, where prepositions are mostly found in the min area.",
"Articles Last, we investigate articles and particularly the determiners the and a .",
"We take them to have no sense, 4 low syntactic variation and high semantic context variation the contexts they appear in do not have any semantic similarity in most cases.",
"Thus, we expect them to demonstrate high contextualization with highly context-specific embeddings.",
"BERT is able to model this through low self-similarity , which is more prominent for the than for a , nonetheless consistent for both.",
"Discussion Summing up, we see that BERT struggles to efficiently capture the functionality continuum.",
"While BERT manages to model the ends of the continuum, i.e., the mostly content and mostly functional words, it fails to create expressive embeddings for categories with content as well as functional properties.",
"This finding is in line with previous literature that has shown that current LMs cannot efficiently capture hard linguistic phenomena (e.g., Dasgupta et al. (2018); McCoy et al. (2019); Richardson et al. (2020)), with modals, quantifiers and temporal reasoning belonging to these phenomena.",
"Our work suggests that the BERT embeddings are not specific enough to capture the inherent functionality of certain word types, i.e., BERT does not learn the relevant generalizations.",
"Additionally, we show that contextualization is neither entirely driven by polysemy nor context variation.",
"Rather, contextualization can be explained via the harmonical combination of functionality, sense variation, syntactic variation and semantic context variation: BERT can efficiently model polysemy, homonymy and mononymy, i.e., it can efficiently capture words that appear in semantic contexts of high variation and low variation and 4 We treat determiners as definiteness markers, rather than as quantifiers or discourse markers, to be in-line with their treatment in popular NLP tasks such as NLI.",
"independently of their polysemy.",
"What it cannot model are words that have a semi-functional/semi-content nature (models, quantifiers, temporal ad-verbials), see Table 1.",
"Concerning models and quantifiers, BERT cannot learn the inherent functionality from the context alone and thus treats the words as simple monosemous words.",
"Concerning temporal adverbials, BERT cannot deal with the combination of low sense variation and high semantic context variation a rather unusual combination and is unable to conclude a single word meaning.",
"Although prepositions have the same triggers as modals and quantifiers, BERT follows our expectations with respect to contextualization.",
"This could be due to their higher syntactic flexibility or their close semantic relatedness with their content complements, but this needs to be explored as part of future work.",
"Overall, BERT seems to follow findings of psycholinguistics and language acquisition: children learn content words easier and earlier than function words (Bates et al., 1994; Caselli et al., 1995).",
"Drawing from language acquisition research, we see an opportunity for explainable methods to inspect BERT's inner-workings and improve its linguistic understanding, raising LMs from their infantile state to a more linguistically-mature one.",
"This paper presented new insights on the contextualization of the functionality continuum, showing that BERT fails to capture the nature of semi-functional-semi-content words.",
"These insights were generated through a novel visual analytics technique for contextualized word embedding exploration and analysis.",
"For a deeper understanding of the weaknesses of BERT, our technique can be extended with scores that model common linguistic properties of words and their nearest neighbors, e.g., WordNet semantic similarity or POS similarity scores.",
"Hence, they could serve as means of explanation and bring added value to the eXplainable Artificial Intelligence (XAI) research field.",
"More information about the project can be found under: https://embeddings-explained.lingvis.io .",
"We thank the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for funding within project BU 1806/10-2 Questions Visual-ized of the FOR2111 and project D02 Evalua-tion Metrics for Visual Analytics in Linguistics (Project ID: 251654672 TRR 161).",
"In the following, we describe the two main points with respect to the broader impact statement.",
"With regard to the broader impact of our work, we are going beyond just measuring scores by revealing and explaining the inner-workings of language models.",
"We put the measured scores in context through visual analytics, in combination with probing and adversarial testing methods, for the exploration, explanation, and analysis.",
"With our work, we aim to open new perspectives on measuring and obtaining the model performance, which go beyond typically used performance metrics.",
"With regard to reproducibility concerns, we would like to note that the contextualization scores calculated in this paper rely on the word frequencies and, thus, may differ depending on the analyzed corpus.",
"Future work should investigate the exact effect of word frequency and account for its impact."
] | [
"abstain",
"objective",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain"
] |
[
"To better tackle the named entity recognition (NER) problem on languages with little/no labeled data, cross-lingual NER must effectively leverage knowledge learned from source languages with rich labeled data.",
"Previous works on cross-lingual NER are mostly based on label projection with pairwise texts or direct model transfer.",
"However, such methods either are not applicable if the labeled data in the source languages is unavailable, or do not leverage information contained in unlabeled data in the target language.",
"In this paper, we propose a teacher-student learning method to address such limitations, where NER models in the source languages are used as teachers to train a student model on unlabeled data in the target language.",
"The proposed method works for both single-source and multi-source cross-lingual NER.",
"For the latter, we further propose a similarity measuring method to better weight the supervision from different teacher models.",
"Extensive experiments for 3 target languages on benchmark datasets well demonstrate that our method outperforms existing state-of-the-art methods for both single-source and multi-source cross-lingual NER.",
"Named entity recognition (NER) is the task of identifying text spans that belong to pre-defined categories, like locations, person names, etc .",
"It's a fundamental component in many downstream tasks, and has been greatly advanced by deep neural networks (Lample et al., 2016; Chiu and Nichols, 2016; Peters et al., 2017).",
"However, these approaches generally require massive manually labeled data, which prohibits their adaptation to low-resource languages due to high annotation costs.",
"One solution to tackle that is to transfer knowledge from a source language with rich labeled data to a target language with little or even no labeled Directly apply (i.e., = ) {,} {} Pairwise Relation {}",
"data, which is referred to as cross-lingual NER (Wu and Dredze, 2019; Wu et al., 2020).",
"In this paper, following Wu and Dredze (2019) and Wu et al. (2020), we focus on the extreme scenario of cross-lingual NER where no labeled data is available in the target language, which is challenging in itself and has attracted considerable attention from the research community in recent years.",
"Previous works on cross-lingual NER are mostly based on label projection with pairwise texts or direct model transfer.",
"Label-projection based methods focus on using labeled data in a source language to generate pseudo-labelled data in the target language for training an NER model.",
"For example, Ni et al. (2017) creates automatically labeled NER data for the target language via label projection on comparable corpora and develops a heuristic scheme to select good-quality projection-labeled data.",
"Mayhew et al. (2017) and Xie et al. (2018) translate the source language labeled data at the phrase/word level to generate pairwise labeled data for the target language.",
"Differently, model-transfer based methods (Wu and Dredze, 2019; Wu et al., 2020) focus on training a shared NER model on the labeled data in the source language with language-independent features, such as cross-lingual word representations (Devlin et al., 2019), and then directly testing the model on the target language.",
"However, there are limitations in both label-projection based methods and model-transfer based methods.",
"The former relies on labeled data in the source language for label projection, and thus is not applicable in cases where the required labeled data is inaccessible ( e.g. , due to privacy/sensitivity issues).",
"Meanwhile, the later does not leverage unlabeled data in the target language, which can be much cheaper to obtain and probably contains very useful language information.",
"In this paper, we propose a teacher-student learning method for cross-lingual NER to address the mentioned limitations.",
"Specifically, we leverage multilingual BERT (Devlin et al., 2019) as the base model to produce language-independent features.",
"A previously trained NER model for the source language is then used as a teacher model to predict the probability distribution of entity labels ( i.e. , soft labels) for each token in the non-pairwise unlabeled data in the target language.",
"Finally, we train a student NER model for the target language using the pseudo-labeled data with such soft labels.",
"The proposed method does not rely on labelled data in the source language, and it also leverages the available information from unlabeled data in the target language, thus avoiding the mentioned limitations of previous works.",
"Note that we use the teacher model to predict soft labels rather than hard labels ( i.e. , one-hot labelling vector), as soft labels can provide much more information (Hinton et al., 2015) for the student model.",
"Figure 1 shows the differences between the proposed teacher-student learning method and the typical label-projection or model-transfer based methods.",
"We further extend our teacher-student learning method to multi-source cross-lingual NER, considering that there are usually multiple source languages available in practice and we would prefer transferring knowledge from all source languages rather than a single one.",
"In this case, our method still enjoys the same advantages in terms of data availability and inference efficiency, compared with existing works (Tackstrom, 2012; Chen et al., 2019; Enghoff et al., 2018; Rahimi et al., 2019).",
"Moreover, we propose a method to measure the similarity between each source language and the target language, and use this similarity to better weight the supervision from the corresponding teacher model.",
"We evaluate our proposed method for 3 target languages on benchmark datasets, using different source language settings.",
"Experimental results show that our method outperforms existing state-of-the-art methods for both single-source and multi-source cross-lingual NER.",
"We also conduct case studies and statistical analyses to discuss why teacher-student learning reaches better results.",
"The main contributions of this work are: We propose a teacher-student learning method for single-source cross-lingual NER, which addresses limitations of previous works w.r.t data availability and usage of unlabeled data.",
"We extend the proposed method to multi-source cross-lingual NER, using a measure of the similarities between source/target languages to better weight teacher models.",
"We conduct extensive experiments validating the effectiveness and reasonableness of the proposed methods, and further analyse why they attain superior performance.",
"Single-Source Cross-Lingual NER: Such approaches consider one single source language for knowledge transfer.",
"Previous works can be divided into two categories: label-projection and model-transfer based methods.",
"Label-projection based methods aim to build pseudo-labeled data for the target language to train an NER model.",
"Some early works proposed to use bilingual parallel corpora and project model expectations (Wang and Manning, 2014) or labels (Ni et al., 2017) from the source language to the target language with external word alignment information.",
"But obtaining parallel corpora is expensive or even infeasible.",
"To tackle that, recent methods proposed to firstly translate source-language labeled data at the phrase level (Mayhew et al., 2017) or word level (Xie et al., 2018), and then directly copy labels across languages.",
"But translation introduces extra noise due to sense ambiguity and word order differences between languages, thus hurting the trained model.",
"Model-transfer based methods generally rely on language-independent features ( e.g. , cross-lingual word embeddings (Ni et al., 2017; Huang et al., 2019; Wu and Dredze, 2019; Moon et al., 2019), word clusters (Tackstrom et al., 2012), gazetteers (Zirikly and Hagiwara, 2015), and wik-ifier features (Tsai et al., 2016)), so that a model trained with such features can be directly applied to the target language.",
"For further improvement, Wu et al. (2020) proposed constructing a pseudo-training set for each test case and fine-tuning the model before inference.",
"However, these methods do not leverage any unlabeled data in the target language, though such data can be easy to obtain and benefit the language/domain adaptation.",
"Multi-Source Cross-Lingual NER: Multi-source cross-lingual NER considers multiple source languages for knowledge transfer.",
"T ackstr om (2012) and Moon et al. (2019) concatenated the labeled data of all source languages to train a unified model, and performed cross-lingual NER in a direct model transfer manner.",
"Chen et al. (2019) leveraged adversarial networks to learn language-independent features, and learns a mixture-of-experts model (Shazeer et al., 2017) to weight source models at the token level.",
"However, both methods straightly rely on the availability of labeled data in the source languages.",
"Differently, Enghoff et al. (2018) implemented multi-source label projection and studied how source data quality influence performance.",
"Rahimi et al. (2019) applied truth inference to model the transfer annotation bias from multiple source-language models.",
"However, both methods make predictions via an ensemble of source-language models, which is cumbersome and computationally expensive, especially when a source-language model has massive parameter space.",
"Teacher-Student Learning: Early applications of teacher-student learning targeted model compression (Bucilu et al., 2006), where a small student model is trained to mimic a pre-trained, larger teacher model or ensemble of models.",
"It was soon applied to various tasks like image classification (Hinton et al., 2015; You et al., 2017), dialogue generation (Peng et al., 2019), and neural machine translation (Tan et al., 2019), which demonstrated the usefulness of the knowledge transfer approach.",
"In this paper, we investigate teacher-student learning for the task of cross-lingual NER, in both single-source and multi-source scenarios.",
"Different from previous works, our proposed method does not rely on the availability of labelled data in source languages or any pairwise texts, while it can also leverage extra information in unlabeled data in the target language to enhance the cross-lingual transfer.",
"Moreover, compared with using an ensemble of source-language models, our method uses a single student model for inference, which can enjoy higher efficiency.",
"Named entity recognition can be formulated as a sequence labeling problem, i.e. , given a sentence x = { x i } Li =1 with L tokens, an NER model is supposed to infer the entity label y i for each token x i and output a label sequence y = { y i } Li =1 .",
"Under the paradigm of cross-lingual NER, we assume there are K source-language models previously trained with language-independent features.",
"Our proposed teacher-student learning method then uses those K source-language models as teachers to train an effective student NER model for the target language on its unlabeled data D tgt .",
"Here we firstly consider the case of only one source language ( K = 1 ) for cross-lingual NER.",
"The overall framework of the proposed teacher-student learning method for single-source cross-lingual NER is illustrated in Figure",
"2. 3.1.1 NER Model Structure As shown in Figure 2, for simplicity, we employ the same neural network structure for both teacher (source-language) and student (target-language) NER models.",
"Note that the student model is flexi-ble and its structure can be determined according to the trade-off between performance and train-ing/inference efficiency.",
"Here the adopted NER model consists of an encoder layer and a linear classification layer.",
"Specifi-cally, given an input sequence x = { x i } Li =1 with L tokens, the encoder layer f maps it into a sequence of hidden vectors h = { h i } Li =1 : h = f ( x ) (1) Here f ( ) can be any encoder model that produces cross-lingual token representations, and h i is the hidden vector corresponding to the i -th token x i .",
"With each h i derived, the linear classification layer computes the probability distribution of entity labels for the corresponding token x i , using a softmax function: p ( x i , ) = softmax ( W h i + b ) (2) where p ( x i , ) R | C | with C being the entity label set, and = { f , W, b } denotes the to-be-learned model parameters.",
"Training: We train the student model to mimic the output probability distribution of entity labels by the teacher model, on the unlabeled data in the target language D tgt .",
"Knowledge from the teacher model is expected to transfer to the student model, while the student model can also leverage helpful language-specific information available in the unlabeled target-language data.",
"Given an unlabeled sentence x (cid:48) D tgt in the target language, the teacher-student learning loss w.r.t x (cid:48) is formulated as the mean squared error (MSE) between the output probability distributions of entity labels by the student model and those by the teacher model, averaged over tokens.",
"Note that here we follow Yang et al. (2019) and use the MSE loss, because it is symmetric and mimics all probabilities equally.",
"Suppose that for the i -token in x (cid:48) , i.e. , x (cid:48) i , the probability distribution of entity labels output by the student model is denoted as p ( x (cid:48) i , S ) , and that output by the teacher model as p ( x (cid:48) i , T ) .",
"Here S and T , respectively, denote Gradient Back-Propagation Inference Training Unlabeled Target-Language Data Student Loss Function Teacher () . . . Teacher (1) 1 Figure 3: Framework of the proposed teacher-student learning method for multi-source cross-lingual NER.",
"the parameters of the student and the teacher models.",
"The teacher-student learning loss w.r.t x (cid:48) is then defined as: L ( x (cid:48) , S ) = 1 LL (cid:88) i =1 MSE (cid:0) p ( x (cid:48) i , S ) , p ( x (cid:48) i , T ) (cid:1) (3) And the whole training loss is the summation of losses w.r.t all sentences in D tgt , as defined below.",
"L ( S ) = (cid:88) x (cid:48) D tgt L ( x (cid:48) , S ) (4) Minimizing L ( S ) will derive the student model.",
"Inference: For inference in the target language, we only utilize the learned student model to predict the probability distribution of entity labels for each token x i in a test sentence x .",
"Then we take the entity label c C with the highest probability as the predicted label y i for x i : y i = arg max c p ( x i , S ) c (5) where p ( x i , S ) c denotes the predicted probability corresponding to the entity label c in p ( x i , S ) .",
"The framework of the proposed teacher-student learning method for multi-source ( K > 1 ) cross-lingual NER is illustrated in Figure",
"3. 3.2.1 Extension to Multiple Teacher Models As illustrated in Figure 3, we extend the single-teacher framework in Figure 2 into a multi-teacher one, while keeping the student model unchanged.",
"Note that, for simplicity, all teacher models and the student model use the same model structure as 3.1.1.",
"Take the k -th teacher model for example, and denote its parameters as ( k ) T .",
"Given a sentence x (cid:48) = { x (cid:48) i } Li =1 with L tokens from the unlabeled data D tgt in the target language, the output probability distribution of entity labels w.r.t the i -th token x i can be derived as Eq.",
"1 and 2, which is denoted as p ( x (cid:48) i , ( k ) T ) .",
"To combine all teacher models, we add up their output probability distributions with a group of weights { k } Kk =1 as follows.",
"where p ( x (cid:48) i , T ) is the combined probability distribution of entity labels, T = { ( k ) T } Kk =1 is the set of parameters of all teacher models, and k is the weight corresponding to the k -th teacher model, with (cid:80) Kk =1 k = 1 and k 0 , k { 1 , . . . , K } .",
"Here we elaborate on how to derive the weights { k } Kk =1 in cases w/ or w/o unlabeled data in the source languages.",
"Source languages more similar to the target language should generally be assigned higher weights to transfer more knowledge.",
"With Unlabeled Source-Language Data: As no labeled data is available, existing supervised lan-guage/domain similarity learning methods for a target task ( i.e. , NER) (McClosky et al., 2010) are not applicable here.",
"Inspired by Pinheiro (2018), we propose to introduce a language identification auxiliary task for calculating similarities between source and target languages, and then weight teacher models based on this metric.",
"In the language identification task, for the k th source language, each unlabeled sentence u ( k ) in it is associated with the language index k to build its training dataset, denoted as D ( k ) src = { ( u ( k ) , k ) } .",
"We also assume that in the m dimensional language-independent feature space, sentences from each source language should be clustered around the corresponding language embedding vector.",
"We thus introduce a learnable language embedding vector ( k ) R m for the k -th source language, and then utilize a bilinear operator to measure similarity between a given sentence u and the k -th source language: s ( u , ( k ) ) = g T ( u ) M ( k ) (8) where g ( ) can be any language-independent model that outputs sentence embeddings, and M R m m denotes the parameters of the bilinear operator.",
"By building a language embedding matrix P R m K with each ( k ) column by column , and applying a softmax function over the bilinear operator, we can derive language-specific probability distributions w.r.t u as below.",
"Then the parameters M and P are trained to identify the language of each sentence in { D ( k ) src } Kk =1 via minimizing the cross-entropy (CE) loss:",
"where D src is the union set of { D ( k ) src } Kk =1 , Z = | D src | , (cid:107) (cid:107) 2 F denotes the squared Frobenius norm, and I is an identity matrix.",
"The regularizer in L ( P, M ) is to encourage different dimensions of the language embedding vectors to focus on different aspects, with 0 being its weighting factor.",
"With learned M and P = [ (1) , (2) , . . . , ( K ) ] , we compute the weights { k } Ki =1 using the unlabeled data in the target language D tgt : k = 1 | D tgt | (cid:88) x (cid:48) D tgt exp (cid:0) s ( x (cid:48) , ( k ) ) / (cid:1) (cid:80) Ki =1 exp (cid:0) s ( x (cid:48) , ( i ) ) / (cid:1) (11) where is a temperature factor to smooth the output probability distribution.",
"In our experiments, we set it as the variance of all values in { s ( x (cid:48) , ( k ) ) } , x (cid:48) D tgt , k { 1 , ..., K } , so that k would not be too biased to either 0 or 1 .",
"Training: With the combined probability distribution of entity labels from multiple teacher models, i.e. , p ( x (cid:48) i , T ) in Eq.",
"6, the training loss for the student model is identical to Eq.",
"3 and",
"4. Inference: For inference on the target language, we only use the learned student model and make predictions as in the single-source scenario (Eq. 5).",
"We conduct extensive experiments for 3 target languages ( i.e. , Spanish, Dutch, and German) on standard benchmark datasets, to validate the effectiveness and reasonableness of our proposed method for singleand multi-source cross lingual NER.",
"Datasets We use two NER benchmark datasets: CoNLL-2002 (Spanish and Dutch) (Tjong Kim Sang, 2002); CoNLL-2003 (English and German) (Tjong Kim Sang and De Meulder, 2003).",
"Both are annotated with 4 entity types: PER , LOC , ORG , and MISC .",
"Each language-specific dataset is split into training, development, and test sets.",
"Table 1 reports the dataset statistics.",
"All sentences are tokenized into sequences of subwords with WordPiece (Wu et al., 2016).",
"Following Wu and Dredze (2019), we also use the BIO entity labelling scheme.",
"In our experiments, for each source language, an NER model is trained previously with its corresponding labeled training set.",
"As for the target language, we discard the entity labels from its training set, and use it as unlabeled target-language data D tgt .",
"Similarly, unlabeled source-language data for learning language similarities (Eq. 10) is simulated via discarding the entity labels of each training set.",
"Network Configurations We leverage the cased multilingual BERTBASE (Wu and Dredze, 2019) for both f ( ) in Eq.",
"1 and g ( ) in Eq.",
"8, with 12 Transformer blocks, 768 hidden units, 12 self-attention head, GELU activations (Hendrycks and Gimpel, 2016), and learned positional embeddings.",
"We use the final hidden vector of the first [ CLS ] token as the sentence embedding for g ( ) , and use the mean value of sentence embeddings w.r.t the k -th source language to initialize ( k ) in Eq.",
"8.",
"Network Training We implement our proposed method based on huggingface Transformers 1 .",
"Following Wolf et al. (2019), we use a batch size of 32, and 3 training epochs to ensure convergence of optimization.",
"Following Wu and Dredze (2019), we freeze the parameters of the embedding layer and the bottom three layers of BERTBASE .",
"For the optimizers, we use AdamW (Loshchilov and Hutter, 2017) with learning rate of 5 e 5 for teacher models (Wolf et al., 2019), and 1 e 4 for the student model (Yang et al., 2019) to converge faster.",
"As for language similarity measuring ( i.e. , Eq. 10), we set = 0 .",
"01 following Pinheiro (2018).",
"Besides, we use a low-rank approximation for the bilinear operator M , i.e. , M = UTV where U, V R d m with d (cid:28) m , and we empirically set d = 64 .",
"Performance Metric We use phrase level F1-score as the evaluation metric, following Tjong Kim Sang (2002).",
"For each experiment, we conduct 5 runs and report the average F1-score.",
"Single-Source Cross-Lingual NER Table 2 reports the results of different single-source cross-lingual NER methods.",
"All results are obtained with English as the source language and others as target languages.",
"It can be seen that our proposed method outperforms the previous state-of-the-art methods.",
"Particularly, compared with the remarkable Wu and Dredze (2019) and Moon et al. (2019), which use nearly the same NER model as our method but is based on direct model transfer, our method obtains significant and consistent improvements in 1 https://github.com/huggingface/transformers es nl de Tackstrom (2012) 61.90 59.90 36.40 Rahimi et al. (2019) 71.80 67.60 59.10 Chen et al. (2019) 73.50 72.40 56.00 Moon et al. (2019) 76.53 83.35 72.44 Ours-avg 77.75 80.70 74.97 Ours-sim 78.00 81.33 75.33 Table 3: Performance comparisons of multi-source cross-lingual NER.",
"F1-scores, ranging from 0.51 for Dutch to 1.80 for German.",
"That well demonstrates the benefits of teacher-student learning over unlabeled target-language data, compared to direct model transfer.",
"Moreover, compared with the latest meta-learning based method (Wu et al., 2020), our method requires much lower computational costs for both training and inference, meanwhile reaching superior performance.",
"Multi-Source Cross-Lingual NER Here we select source languages in a leave-one-out manner, i.e. , all languages except the target one are regarded as source languages.",
"For fair comparisons, we take Spanish, Dutch, and German as target languages, respectively.",
"Table 3 reports the results of different methods for multi-source cross-lingual NER.",
"Both our teacher-student learning methods, i.e. , Ours-avg (averaging teacher models, Eq. 7) and Ours-sim (weighting teacher models with learned language similarities, Eq. 11), outperform previous state-of-the-art methods on Spanish and German by a large margin, which well demonstrates their effectiveness.",
"We attribute the large performance gain to the teacher-student learning process to further leverage helpful information from unlabeled data in the target language.",
"Though Moon et al. (2019) achieves superior performance on Dutch, it is not applicable in cases where the labeled source-language data is inaccessible, and thus it still suffers from the aforementioned limitation w.r.t .",
"data availability.",
"Moreover, compared with Ours-avg , Ours-sim brings consistent performance improvements.",
"That means, if unlabeled data in source languages is available, using our proposed language similarity measuring method for weighting different teacher es nl de Single-source: Ours 76.94 80.89 73.22 HL 76.60 (-0.34) 80.43 (-0.46) 72.98 (-0.24) MT 75.60 (-1.34) 79.99 (-0.90) 71.76 (-1.46) Multi-source: Ours-avg 77.75 80.70 74.97 HL-avg 77.65 (-0.10) 80.39 (-0.31) 74.31 (-0.66) MT-avg 77.25 (-0.50) 80.53 (-0.17) 74.18 (-0.79) Ours-sim 78.00 81.33 75.33 HL-sim 77.81 (-0.19) 80.27 (-1.06) 74.63 (-0.70) MT-sim 77.12 (-0.88) 80.24 (-1.09) 74.33 (-1.00) Table 4: Ablation study of the proposed teacher-student learning method for cross-lingual NER.",
"models can be superior to simply averaging them.",
"Analyses on Teacher-Student Learning To validate the reasonableness of our proposed teacher-student learning method for cross-lingual NER, we introduce the following baselines.",
"1) Hard Label (HL) , which rounds the probability distribution of entity labels ( i.e. , soft labels output by teacher models) into a one-hot labelling vector ( i.e. , hard labels) to guide the learning of the student model.",
"Note that in multi-source cases, we use the combined probability distribution of multiple teacher models (Eq. 6) to derive the hard labels.",
"To be consistent with Eq.",
"3, we still adopt the MSE loss here.",
"In fact, both MSE loss and cross-entropy loss lead to the same observation described in this subsection.",
"2) Direct Model Transfer (MT) , where NO unlabeled target-language data is available to perform teacher-student learning, and thus it degenerates into:",
"a) directly applying the source-language model in single-source cases, or",
"b) directly applying a weighted ensemble of source-language models in multi-source cases, with weights derived via Eq.",
"6 and Eq.",
"11.",
"Table 4 reports the ablation study results.",
"It can be seen that using hard labels ( i.e. , HL-*) would result in consistent performance drops in all cross-lingual NER settings, which validates using soft labels in our proposed teacher-student learning method can convey more information for knowledge transfer than hard labels.",
"Moreover, we can also observe that, using direct model transfer ( i.e. , #1 Spanish Source-Language Model: ...Etchart [I-PER, 1.00] Sydney [B-LOC, 0.98] ( Australia [B-LOC, 1.00] ) , 23 may ( EFE [O, 0.53] ) .",
"MT-*) would lead to even more significant performance drops in all cross-lingual NER settings (up to 1.46 F1-score).",
"Both demonstrate that leveraging unlabeled data in the target language can be helpful, and that the proposed teacher-student learning method is capable of leveraging such information effectively for cross-lingual NER.",
"Analyses on Language Similarity Measuring We further compare the proposed language similarity measuring method with other commonly used unsupervised metrics, i.e. , cosine similarity and (cid:96) 2 distance.",
"Specifically, s ( x (cid:48) , ( k ) ) in Eq.",
"11 is replaced by cosine similarity or negative (cid:96) 2 distance between x (cid:48) and the mean value of sentence embeddings w.r.t the k -th source language.",
"As shown in Table 5, replacing the proposed language similarity measuring method with either cosine / (cid:96) 2 metrics leads to consistent performance drops across all target languages.",
"This further demonstrates the benefits of our language identification based similarity measuring method.",
"By analyzing which failed cases of directly applying the source-language model are corrected by the proposed teacher-student learning method, we try to bring up insights on why teacher-student learning works, in the case of single-source cross-lingual NER.",
"probability of the prediction Figure 5: Percentage of corrected mispredictions, in different probability intervals.",
"Firstly, teacher-student learning can probably help to learn label preferences for some specific words in the target language.",
"Specifically, if a word appears in the unlabeled target-language data and the teacher model consistently predicts it to be associated with an identical label with high probabilities, the student model would learn the preferred label w.r.t that word, and predict it in cases where the sentence context may not provide enough information.",
"Such label preference can help the predictions for tokens that are less ambiguous and generally associated with an identical entity label.",
"As illustrated in Figure 4, in example #1, the source-language (teacher) model, fails to identify EFE as an ORG in the test sentences, while the student model ( i.e. , Ours) can correctly label it, because it has seen EFE labeled as ORG by the teacher model with high probabilities in the unlabeled target-language data D tgt .",
"Similar results can also be observed in example #2 and #3.",
"Moreover, teacher-student learning may help to find a better classifying hyperplane for the student NER model with unlabelled target-language data.",
"Actually, we notice that the source-language model generally makes correct label predictions with higher probabilities, and makes mispredictions with relatively lower probabilities.",
"By calculating the proportion of its mispredictions that are corrected by our teacher-student learning method in different probability intervals, we find that our method tends to correct the low-confidence mispredictions, as illustrated in Figure 5.",
"We conjecture that, with the help of unlabeled target-language data, our method can probably find a better classifying hyperplane for the student model, so that the low-confidence mispredictions, which are closer to the classifying hyperplane of the source-language model, can be clarified.",
"In this paper, we propose a teacher-student learning method for single-/multi-source cross-lingual NER, via using source-language models as teachers to train a student model on unlabeled data in the target language.",
"The proposed method does not rely on labelled data in the source languages and is capable of leveraging extra information in the unlabelled target-language data, which addresses the limitations of previous label-projection based and model-transfer based methods.",
"We also propose a language similarity measuring method based on language identification, to better weight different teacher models.",
"Extensive experiments on benchmark datasets show that our method outperforms the existing state-of-the-art approaches."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"objective",
"method",
"objective",
"objective",
"result",
"result",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"abstain",
"objective",
"result"
] |
[
"Deep learning models have achieved great success on the task of Natural Language Inference (NLI), though only a few attempts try to explain their behaviors.",
"Existing explanation methods usually pick prominent features such as words or phrases from the input text.",
"However, for NLI, alignments among words or phrases are more enlightening clues to explain the model.",
"To this end, this paper presents AREC , a post-hoc approach to generate alignment rationale explanations for co-attention based models in NLI.",
"The explanation is based on feature selection , which keeps few but sufficient alignments while maintaining the same prediction of the target model.",
"Experimental results show that our method is more faithful and readable compared with many existing approaches.",
"We further study and reevaluate three typical models through our explanation beyond accuracy, and propose a simple method that greatly improves the model robustness.",
"1 1 Introduction Natural Language Inference (NLI) is a fundamental task in Natural Language Processing (NLP) which is to determine if a hypothesis entails a premise.",
"Recently, with the introduction of large-scale annotated datasets (Bowman et al., 2015; Williams et al., 2018), deep learning models are adopted to solve the task in a supervised manner (Conneau et al., 2017; Chen et al., 2017; Devlin et al., 2019) and achieve great success, while inner mechanisms of these methods are still opaque due to high computational complexities.",
"assigns saliency scores for input features (Bah-danau et al., 2015; Lundberg and Lee, 2017; Thorne et al., 2019; Kim et al., 2020), and feature selection or rationale that keeps a subset of features sufficient for the prediction (Lei et al., 2016; Bastings et al., 2019; De Cao et al., 2020; DeYoung et al., 2020).",
"Figure 1",
"(a) and",
"(b) present a text attribution explanation by LIME (Ribeiro et al., 2016) and a text rationale explanation from Li et al. (2016) of an NLI sentence pair.",
"Both explanations provide insights of which input words are responsible for the prediction.",
"However, NLI is a cross-sentence task requiring a system to reason over alignments 2 (MacCartney and Manning, 2009).",
"Intuitively, it is more sensible to explain NLI systems in the way of 2 In machine translation, alignments refer to bilingual text pairs with identical meanings.",
"But for NLI, the semantics of two sentences may be different, it is more suitable to define alignments as any text pairs related lexically or logically, etc. alignments instead of isolated words/phrases.",
"For the example in Figure 1, the contradicted phrase pair street store is one of the key alignments responsible for the correct prediction.",
"To explain NLI models over alignments, the literature usually looks at co-attention weights (Parikh et al., 2016; Pang et al., 2016; Chen et al., 2017), which is a dominant way to implicitly align word pairs (Wang et al., 2017; Gong et al., 2018; Devlin et al., 2019).",
"However, attention is argued not as explainable as expected (Jain and Wallace, 2019; Serrano and Smith, 2019; Bastings and Filip-pova, 2020).",
"Moreover, co-attention assigns scores among words thus forbids us to observe phrase-level alignments, which is a flaw that generally exists for attribution explanations as shown in Figure 1",
"(c).",
"Other works build hard alignments resorting sparse attention (Yu et al., 2019; Bastings et al., 2019; Swanson et al., 2020).",
"But their self-explanatory architectures pay for the interpretability at a cost of performance dropping on accuracy (Molnar, 2020).",
"Meanwhile, these techniques are unable to analyze well-trained models.",
"To resolve above problems, this paper proposes AREC , a post-hoc local approach to generate A lignment R ationale E xplanation for C o-attention based models.",
"Analogous with Lei et al. (2016), our alignment rationale is a set that contains text pairs from the NLI sentence pair with two requirements.",
"First, the explanation is supposed to be faithful to the predictive model, where selected text pairs must alone suffice for the original prediction.",
"Second, the explanation should be human-friendly or readable (Miller, 2019), which means the pairs are few to promote compact rationales, and extracted continuously to make phrase-level rationales as far as possible (Lei et al., 2016; Bastings et al., 2019).",
"Figure 1",
"(d) presents an example of AREC explanation.",
"It shows that the model reaches the right prediction reasonably: it identi-fies People Passengers , walk through car driving and store street to make up the alignment rationale.",
"AREC is flexible to apply on any co-attention architectures, allowing us for deep investigations of well-trained models.",
"With the proposed AREC , we study three typical co-attention based models Decomposable Attention (DA) (Parikh et al., 2016), Enhanced LSTM (ESIM) (Chen et al., 2017) and BERT (Devlin et al., 2019) on four benchmarks including SNLI (Bow-man et al., 2015), ESNLI (Camburu et al., 2018), BNLI (Glockner et al., 2018) and HANS (McCoy et al., 2019).",
"Experimental results show that our method could generate more faithful and readable explanations.",
"Moreover, we employ our proposed AREC to analyze these models deeply from the aspect of alignments.",
"Based on our explanations, we further present a simple improvement strategy that greatly increases robustness of different models without modifying their architectures or retraining.",
"This proves that our method could factually reflect how models work.",
"Our contributions are summarized as follows: 1) We come up with AREC , a post-hoc local explanation method to extract the alignment rationale for co-attention based models.",
"We compare AREC with other explanation methods, illustrating its advantages on faithfulness and readability.",
"2) We diagnose three typical co-attention based models using AREC by re-evaluating them in a more fine-grained alignment level beyond accuracy.",
"Experimental results could reveal potential improvement solutions.",
"To the best of our knowledge, we are the first to study existing models with alignment exhaustively.",
"Natural Language Inference has been studied for years.",
"Despite lots of works construct representations for the input two sentences individually (Bow-man et al., 2015; Mueller and Thyagarajan, 2016; Conneau et al., 2017), the task actually requires a system to recognize alignments (MacCartney and Manning, 2009).",
"In early days, alignment detection is sometimes formed as an independent task (Chambers et al., 2007; MacCartney et al., 2008), or a component of a pipeline system (MacCartney et al., 2006).",
"Currently deep learning methods seek to model alignments implicitly through co-attention mechanism (Parikh et al., 2016; Pang et al., 2016; Chen et al., 2017; Wang et al., 2017; Gong et al., 2018; Joshi et al., 2019; Devlin et al., 2019).",
"The technique is first proposed in machine translation (Bahdanau et al., 2015), and soon dominates in many applications including NLI.",
"However why models with co-attention layers are effective is still called for answers.",
"Explaining model behaviors has attracted much interests.",
"Existing studies include opening the component of models (Murdoch et al., 2018), assigning word importance scores (Ribeiro et al., 2016; Li et al., 2016; Kim et al., 2020), extracting predictive related input pieces, referred as sufficient input subset (Carter et al., 2019) or rationale (Lei et al., 2016; Bastings et al., 2019), building hierarchical explanations (Chen et al., 2020; Zhang et al., 2020), and generating natural language explanations (Camburu et al., 2018; Kumar and Talukdar, 2020).",
"However, they usually explain the model on the granularity of words/phrases.",
"Such ways are sufficient for text classification but not suitable for NLI, since atom features in the task are alignments.",
"Co-attention itself is often viewed as an explanation.",
"Indeed, co-attention is a key proxy to model alignments, where perturbing its weights has a significant impact (Vashishth et al., 2019).",
"Yet recently, attention is argued to be not explainable as expected (Jain and Wallace, 2019; Serrano and Smith, 2019; Grimsley et al., 2020; Bastings and Filippova, 2020).",
"Secondly, co-attention along with feature attribution explanations just assigns scores among words, which is infeasible to observe phrase-level alignments.",
"Furthermore, for models with multiple attentions (Vaswani et al., 2017), it's hard to acquire a global understanding of alignments.",
"Other approaches include Yu et al. (2019), who adopts generator-encoder architecture (Lei et al., 2016) to generate corresponded rationales.",
"But their approach is unable to extract more fine-grained alignments (e.g., one-to-one continuous alignments).",
"Bastings et al. (2019); Swanson et al. (2020) design sparse attention for hard alignments.",
"However, these methods trade performance for interpretability, and are immutable to analyze well-trained models.",
"In this section, we describe our AREC in details.",
"As mentioned before, AREC is a post-hoc approach for explaining co-attention based models.",
"Thus we first introduce the co-attention layer, then depict the propose AREC .",
"In our notation, we have an instance including a premise P = [ p 1 , , p | p | ] R d | p | and a hypothesis H = [ h 1 , , h | h | ] R d | h | , where | p | / | h | is the length of the premise/hypothesis, and p i / h j R d denotes corresponding word embedding",
"embedding (fixed or contextual).",
"Co-attention layer accepts P and H as input and outputs alignment enhanced word representations P R d | p | and H R d | h | .",
"At the first step, we compute a similarity matrix S R | p || h | S i,j = ( p i , h j ) (1) where is a similarity function, ordinarily a vector dot product (Chen et al., 2017).",
"Then S is normalized to compute soft alignment scores for every word in a sentence w.r.t all the words in its partner AP i, : = softmax( S i, : ) AH : ,j = softmax( S : ,j ) (2) Here AP and AH are so-called co-attention matrices, each element inside indicates the matching degree of the corresponding word pair.",
"Next, we obtain soft alignments features for every word in the premise/hypothesis by averaging word embeddings in the hypothesis/premise weighted by the soft alignment scores P = H APT H = P AH (3) Now P / H is a richer representation of P / H enhanced by H / P and fed to following modules, such as a classifier which outputs probabilities of candidate categories, i.e., entailment , contradiction and neutral in NLI task.",
"The proposed AREC relies on feature selection , keeping few but sufficient alignments while maintaining the original prediction.",
"Thus, to restrict the model to consider only some specific alignments, we mask the co-attention matrices $A^{P}$ and $A^{H}$, following Serrano and Smith (2019) and Pruthi et al. (2020).",
"Let $Z \in \{0, 1\}^{|p| \times |h|}$ be a binary mask indicating the presence or absence of every word-pair alignment, and let $\mathcal{M}$ be a model with co-attention layers.",
"The masking process is then simply the Hadamard product between the mask $Z$ and the co-attention matrices $A^{P}$ and $A^{H}$.",
"An alignment rationale is obtained from an optimization problem: $Z^{*} = \arg\min_{Z} \; \lambda_0 \mathcal{L}_0 + \lambda_1 \mathcal{L}_1 + \lambda_2 \mathcal{L}_2$ (4).",
"The loss contains three terms ($\mathcal{L}_0$, $\mathcal{L}_1$ and $\mathcal{L}_2$) to satisfy faithfulness and readability, as mentioned in Section 1; $\lambda_0$, $\lambda_1$ and $\lambda_2$ are hyper-parameters standing for loss weights.",
"Every rectangular region in Z represents a text alignment in the alignment rationale.",
"We now describe loss terms.",
"The first term L 0 is about fidelity , asking that the model prediction is maintained after masking (Molnar, 2020).",
"Fidelity ensures faithfulness, making the derived explanation depict the true profile of how the model works.",
"We choose the Euclidean distance between logits as this loss term, i.e., $\mathcal{L}_0 := \lVert \mathcal{M}^{l}(P, H) - \mathcal{M}^{l}_{Z}(P, H) \rVert_2$ (5), where $\mathcal{M}^{l}(P, H)$ and $\mathcal{M}^{l}_{Z}(P, H) \in \mathbb{R}^{3}$ are the original output logits and the output logits when applying the mask $Z$, respectively.",
"Compared to the commonly used KL divergence (De Cao et al., 2020) or label equality (Feng et al., 2018), the Euclidean distance between logits is a stricter constraint that narrows down the solution space and leads to more faithful explanations (footnote 3).",
"Secondly, an explanation ought to be readable (Molnar, 2020).",
"That requirement contains compactness and contiguity under the context of alignment explanation.",
"Compactness draws intuition from the philosophy that a good explanation should be short or selective (Miller, 2019), which encourages fewer alignments to be selected.",
"The compactness loss is simply the L1 norm of the mask $Z$: $\mathcal{L}_1 := \lVert Z \rVert_1 = \sum_{i,j} z_{i,j}$ (6), where $z_{i,j}$ is an element of $Z$.",
"Contiguity encourages continuous phrase-level alignments 4 (Zenkel et al., 2020), which is helpful for human understandings.",
"Concretely, contiguity prefers Z with rectangular clusters.",
"Thus, we have $\mathcal{L}_2 := \sum_{i,j} \mathbb{1}\big(\sum_{z' \in W_{i,j}} z' = 3\big)$ (7), where $\mathbb{1}(\cdot)$ is the indicator function and $W_{i,j} = \{z_{i,j}, z_{i,j+1}, z_{i+1,j}, z_{i+1,j+1}\}$ is a $2 \times 2$ window at the corresponding position.",
"The loss is based on the observation that if there are three 1s in a window, there must be a non-rectangular region nearby, as marked by red boxes in Figure 2.",
"Footnote 3: if we use label equality (Feng et al., 2018), under which the prediction is only maintained in terms of the label, there are many explanations satisfying the constraint; using a strict fidelity constraint ensures uniqueness, or at least less variety, making the explanation more faithful.",
"Footnote 4: following Lei et al. (2016) and Bastings et al. (2019), a phrase can be any continuous span in a sentence, not necessarily a syntactic phrase.",
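"As a concrete reference, the following sketch evaluates the three loss terms for a binary mask; in practice the losses are optimized in expectation over relaxed masks, as described next, and the helper names here are hypothetical.",

```python
import numpy as np

def fidelity_loss(logits_orig, logits_masked):
    # eq. (5): Euclidean distance between original and masked logits
    return np.linalg.norm(logits_orig - logits_masked)

def compactness_loss(Z):
    # eq. (6): L1 norm of the mask, i.e., the number of selected alignments
    return Z.sum()

def contiguity_loss(Z):
    # eq. (7): count 2x2 windows containing exactly three 1s,
    # each of which signals a nearby non-rectangular region
    w = Z[:-1, :-1] + Z[:-1, 1:] + Z[1:, :-1] + Z[1:, 1:]
    return (w == 3).sum()
```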
"Searching the exponentially large ($2^{|p||h|}$) solution space of $Z$ directly is impracticable.",
"To use gradient-based methods, we relax the binary $Z$ to a random matrix $\tilde{Z}$ and optimize the expected loss over it.",
"Specifically, we assume that every element $\tilde{Z}_{i,j}$ of $\tilde{Z}$ is an independent random variable following the HardConcrete distribution (Louizos et al., 2018a).",
"HardConcrete variables can take the exact discrete values 0 and 1, while having continuous and differentiable probability densities on the open interval $(0, 1)$.",
"Additionally, the HardConcrete distribution accommodates reparameterization, permitting us to obtain a HardConcrete sample $z$ by transforming a parameter-less unit uniform sample $u$, i.e., $z = g(u; \theta)$, where $g$ is differentiable.",
"Details are shown in Appendix A. Under this setting, we turn to optimize the expectation of the objective.",
"For $\mathcal{L}_0$, we have $\mathbb{E}[\mathcal{L}_0] = \mathbb{E}_{U}\big[\lVert \mathcal{M}^{l}(P, H) - \mathcal{M}^{l}_{g(U; \theta)}(P, H) \rVert_2\big] \simeq \frac{1}{n} \sum_{i=1}^{n} \lVert \mathcal{M}^{l}(P, H) - \mathcal{M}^{l}_{g(U_i; \theta)}(P, H) \rVert_2$ (8).",
"Here, $U$ is a random matrix filled with i.i.d. unit uniform variables, and $\theta \in \mathbb{R}^{|p| \times |h|}_{+}$ is the parameter of $\tilde{Z}$.",
"The second line is a Monte-Carlo approximation of the expectation, where $n$ is the sample size and $U_i$ is the $i$-th sample of $U$.",
"The expectations of $\mathcal{L}_1$ and $\mathcal{L}_2$ are computed analogously in eq. (9), where $\lceil \tilde{Z} \rceil$ denotes rounding $\tilde{Z}$ up and $\mathrm{P}(\cdot\,; \theta)$ is the probability under the parameter $\theta$.",
"Now, all the losses are differentiable over $\theta$, making gradient descent feasible.",
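"A minimal sketch of the reparameterized sampling and of the Monte-Carlo estimate in eq. (8) follows; the stretch parameters are the conventional choices from Louizos et al. (2018a), and `model_logits_fn` is a hypothetical stand-in for the masked forward pass.",

```python
import numpy as np

def hard_concrete_sample(log_alpha, beta=0.5, gamma=-0.1, zeta=1.1, rng=np.random):
    """z = g(u; theta): differentiable in log_alpha, yet able to take the
    exact values 0 and 1 after stretching and clipping."""
    u = rng.uniform(1e-6, 1 - 1e-6, size=log_alpha.shape)  # parameter-less noise
    s = 1 / (1 + np.exp(-(np.log(u) - np.log(1 - u) + log_alpha) / beta))
    return np.clip(s * (zeta - gamma) + gamma, 0.0, 1.0)

def expected_fidelity(model_logits_fn, log_alpha, n=8):
    """Monte-Carlo estimate of eq. (8); model_logits_fn(None) returns the
    unmasked logits, model_logits_fn(z) the logits under mask z."""
    base = model_logits_fn(None)
    samples = [hard_concrete_sample(log_alpha) for _ in range(n)]
    return np.mean([np.linalg.norm(base - model_logits_fn(z)) for z in samples])
```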
"Derivation details are presented in Appendix B.",
"After training, we obtain the alignment rationale as $z^{*}_{i,j} = \arg\max_{v \in \{0, 1\}} \mathrm{P}(\tilde{Z}_{i,j} = v; \theta_{i,j})$ (10).",
"Our experiments include two parts.",
"First, we quantitatively compare the proposed AREC with several typical explanation methods (Section 4.1) to prove the effectiveness of our method.",
"Second, by means of AREC , we study and re-evaluate different models from the aspect of alignment beyond accuracy, revealing potential improvements (Section 4.2).",
"We use four datasets, SNLI (Bowman et al., 2015), ESNLI (Camburu et al., 2018), BNLI (Glockner et al., 2018) and HANS (McCoy et al., 2019), as our testbeds.",
"SNLI is a traditional NLI benchmark, while ESNLI extends it by annotating text rationales.",
"BNLI and HANS are stress-testing sets that probe lexical inference and overlap heuristics, respectively.",
"We choose three typical co-attention based NLI models for our discussion: DA (footnote 5) (Parikh et al., 2016), ESIM (Chen et al., 2017) and BERT (base version) (Devlin et al., 2019).",
"DA applies the co-attention directly on word embeddings.",
"ESIM further incorporates order information by putting two LSTMs (Hochreiter and Schmidhuber, 1997) before and after the co-attention layer to boost performance.",
"Footnote 5: following Glockner et al. (2018), our implementation discards the optional intra-sentence attention and achieves similar and comparable accuracy.",
"Differently, BERT concatenates the input sentence pair with a template [CLS] p [SEP] h [SEP] and uses global self-attention (Vaswani et al., 2017).",
"All the models are trained on SNLI training set and tested across datasets.",
"We mask attention matrices for DA and ESIM as described in Section 3.2 since they are directly formed by co-attention.",
"For BERT, we use a single mask to identically mask the co-attention corresponding sub-matrices (footnote 6) of all the attention matrices, regardless of layer or attention head.",
"We consider that faithfulness has a higher priority than readability.",
"Correspondingly, we adjust the loss weights dynamically, based on the fidelity of the current mask.",
"To this end, the weights are set as $\lambda_0 = 1$ and $\lambda_1 = \lambda_2 = 0.15 \cdot \mathrm{SpAc}$ (11), where $\mathrm{SpAc}$ is the accuracy of the currently sampled masks: $\mathrm{SpAc} = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}\big[\mathcal{M}^{y}(P, H) = \mathcal{M}^{y}_{g(U_i; \theta)}(P, H)\big]$ (12).",
"Here, $\mathcal{M}^{y}_{Z}$ denotes the model's predicted label under mask $Z$.",
"Thus terms related to readability are controlled by the explanation faithfulness.",
"This simple dynamic-weight strategy is similar to the approach of Platt and Barr (1988), and it greatly improves the explanation quality and the stability of the algorithm.",
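"The dynamic weighting of eqs. (11)-(12) reduces to a few lines; this sketch assumes label arrays gathered from the same mask samples used for eq. (8).",

```python
import numpy as np

def dynamic_weights(labels_orig, labels_masked, base=0.15):
    """lambda_0 stays at 1; the readability weights scale with SpAc,
    the label-preservation accuracy of the currently sampled masks."""
    sp_acc = np.mean(labels_orig == labels_masked)  # eq. (12)
    return 1.0, base * sp_acc, base * sp_acc        # eq. (11)
```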
"In this section, we aim to evaluate the faithfulness and readability of different explanations.",
"We select feature attribution baselines including co-attention itself, perturbation-based approaches LEAVEONEOUT (Li et al., 2016), LIME (Ribeiro et al., 2016), BACKSELECT (Carter et al., 2019), gradient-based approaches GRADIENT (Simonyan et al., 2014) and INTEGRATGRAD (Sundararajan et al., 2017) and a feature selection method DIFFMASK (De Cao et al., 2020).",
"The original DIFFMASK is applied at the text level; we derive an alignment variant for comparison in Appendix C.",
"Footnote 6: for a BERT attention map $A \in \mathbb{R}^{(|p|+|h|+3) \times (|p|+|h|+3)}$, the blocks $A_{2:|p|+1,\; |p|+3:|p|+|h|+2}$ and $A_{|p|+3:|p|+|h|+2,\; 2:|p|+1}$ are the co-attention corresponding sub-matrices.",
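"In 0-indexed array terms, the sub-matrix masking of footnote 6 can be sketched as follows for a single attention map; the same mask is applied across all layers and heads.",

```python
import numpy as np

def mask_bert_coattention(A, Z, lp, lh):
    """A: (L, L) attention map with L = lp + lh + 3 for [CLS] p [SEP] h [SEP];
    Z: (lp, lh) binary alignment mask shared by every layer and head."""
    A = A.copy()
    A[1:lp + 1, lp + 2:lp + lh + 2] *= Z    # premise-to-hypothesis block
    A[lp + 2:lp + lh + 2, 1:lp + 1] *= Z.T  # hypothesis-to-premise block
    return A
```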
"Inspired by DeYoung et al. (2020), we use the Area Over Reservation Curve (AORC) to evaluate faithfulness (footnote 7), computed over masks $Z^{(k)}$, where $Z^{(k)}$ is the mask that reserves the top $k\%$ of co-attention weights from an attribution explanation.",
"Though AREC belongs to the feature selection explanations, its parameter $\theta$ also provides importance scores.",
"We also report fidelity defined in Equation (5) as a measure of faithfulness.",
"For readability evaluation, we report compactness and contiguity defined in Equation (6) and Equation (7) respectively.",
"We also conduct human evaluations on 300 randomly sampled examples from the SNLI test set to directly measure readability.",
"We ask two annotators to rate, on a scale from 1 to 5, how easy it is to read the explanation and to understand the model's decision-making process through alignments, and we report the average scores (footnote 8).",
"We acknowledge that fidelity, compactness and contiguity are exactly the metrics that AREC optimizes.",
"In practice, it is hard to evaluate different explanations uniformly, since their contexts and techniques usually differ completely.",
"Judged purely by their definitions, we consider these metrics reasonable.",
"Note that these metrics are not directly applicable to feature attribution explanations.",
"For a fair comparison, we follow Carter et al. (2019) and induce alignment rationales by thresholding (footnote 9) for the feature attribution baselines.",
"That is, we sequentially retain co-attention weights in order of attribution score until the fidelity loss falls below the pre-defined threshold.",
"The automatic evaluation results and the human readability evaluation results are shown in Table 1 and Table 2, respectively.",
"We obtain the following findings.",
"Footnote 7: we do not use the Area Over Perturbation Curve (AOPC) (DeYoung et al., 2020) because our method reserves features (i.e., alignments) that keep the prediction, so it is more fitting to use a reservation curve.",
"Footnote 8: both annotators are well-educated postgraduates majoring in computer science, and the human evaluation covers the 300 randomly sampled examples from the SNLI test set.",
"Footnote 9: the threshold is set to the $\mathcal{L}_0$ of AREC plus 0.1, to obtain alignment rationales with similar fidelity for a fair comparison.",
"We do not use a fixed-size constraint to construct rationales, as done in Jain et al. (2020), because we believe the size of a rationale depends on the instance.",
"1) AREC is quite faithful, achieving the lowest AORC and fidelity values in most cases.",
"Perturbation-based methods are evenly matched with moderate performance, while gradient-based ones are the least faithful.",
"Surprisingly, co-attention itself is a very strong baseline for indicating important alignments in NLI, surpassing most other baselines on AORC, especially for ESIM.",
"This result is in accordance with Vashishth et al. (2019), who find attention to be more faithful in cross-sentence tasks than in single-sentence tasks.",
"2) AREC is also quite readable, achieving the lowest compactness and contiguity values in most cases under automatic evaluation.",
"AREC is also the most readable explanation according to human evaluation.",
"By contrast, feature attribution methods are unable to induce readable alignment rationales.",
"They reserve too many co-attention weights, usually around half, to ensure fidelity similar to AREC's, rather than satisfying compactness and contiguity.",
"Appendix E shows some examples that give an intuitive sense of the different explanations' readability.",
"3) Compared to the rationale explanation DIFFMASK, AREC is far more promising: it outperforms DIFFMASK by large margins on fidelity while maintaining equivalent or better compactness and contiguity.",
"To our knowledge, DIFFMASK globally learns to explain local instances: the explainer is trained on a training set that may contain artifacts and biases (Gururangan et al., 2018; Tsuchiya, 2018; Poliak et al., 2018).",
"Therefore, this architecture leverages information from the data.",
"As a result, it is susceptible to over-fitting and generates data-dependent biased explanations, leading to poor fidelity on held-out data (BNLI and HANS), as shown in Table 1.",
"Moreover, we believe that a faithful explanation is a profile of a model.",
"Correspondingly, an explanation method should only access knowledge from the model instead of from the data.",
"That is an appealing theoretical advantage of our method.",
"Diverse evaluations are pursued to understand models profoundly (Ribeiro et al., 2020).",
"Beyond accuracy, in this section we analyze DA, ESIM and BERT by means of our proposed AREC, re-evaluating them from the more fine-grained aspect of alignment.",
Table 1: Evaluation results of explanations across datasets (each cell: AORC / FIDE / COMP / CONT).

| Model | Explanation | SNLI (AORC/FIDE/COMP/CONT) | BNLI (AORC/FIDE/COMP/CONT) | HANS (AORC/FIDE/COMP/CONT) |
|---|---|---|---|---|
| DA | CO-ATTENTION | 0.60 / 0.45 / 42.46 / 131.30 | 0.46 / 0.39 / 30.93 / 59.85 | 0.48 / 0.56 / 22.88 / 41.90 |
| DA | LEAVEONEOUT | 1.12 / 0.43 / 57.78 / 70.91 | 1.23 / 0.34 / 64.67 / 65.02 | 0.95 / 0.58 / 66.30 / 125.06 |
| DA | BACKSELECT | 1.15 / 0.43 / 57.05 / 67.08 | 1.34 / 0.34 / 65.19 / 55.88 | 1.07 / 0.58 / 71.61 / 137.85 |
| DA | LIME | 0.99 / 0.43 / 52.80 / 90.81 | 1.22 / 0.34 / 63.01 / 71.95 | 0.81 / 0.57 / 48.32 / 124.71 |
| DA | GRADIENT | 1.42 / 0.42 / 65.65 / 135.09 | 1.73 / 0.35 / 74.80 / 155.50 | 1.76 / 0.55 / 65.69 / 194.50 |
| DA | INTEGRATGRAD | 1.83 / 0.35 / 63.87 / 49.76 | 2.31 / 0.25 / 81.60 / 44.76 | 2.37 / 0.38 / 70.98 / 80.43 |
| DA | DIFFMASK | 0.54 / 1.28 / 2.77 / 0.21 | 0.62 / 1.30 / 6.86 / 1.36 | 0.71 / 0.97 / 6.46 / 1.39 |
| DA | AREC (Ours) | 0.47 / 0.36 / 6.23 / 1.40 | 0.42 / 0.32 / 6.83 / 1.12 | 0.60 / 0.50 / 6.07 / 0.23 |
| ESIM | CO-ATTENTION | 0.24 / 0.29 / 8.72 / 4.43 | 0.55 / 0.15 / 15.46 / 6.555 | 0.51 / 0.42 / 14.40 / 1.36 |
| ESIM | LEAVEONEOUT | 1.01 / 0.25 / 42.88 / 17.80 | 1.05 / 0.16 / 53.15 / 23.38 | 1.05 / 0.43 / 56.37 / 30.76 |
| ESIM | BACKSELECT | 0.90 / 0.25 / 41.08 / 15.73 | 1.08 / 0.16 / 52.32 / 16.12 | 0.98 / 0.43 / 50.88 / 27.52 |
| ESIM | LIME | 0.94 / 0.27 / 52.46 / 72.29 | 1.52 / 0.16 / 76.52 / 57.85 | 1.29 / 0.42 / 73.68 / 179.10 |
| ESIM | GRADIENT | 2.84 / 0.20 / 73.37 / 109.19 | 3.51 / 0.10 / 83.60 / 78.83 | 5.15 / 0.22 / 91.05 / 111.14 |
| ESIM | INTEGRATGRAD | 2.99 / 0.21 / 80.32 / 33.21 | 3.80 / 0.15 / 89.68 / 13.91 | 4.45 / 0.38 / 91.38 / 55.63 |
| ESIM | DIFFMASK | 0.51 / 1.21 / 3.94 / 0.26 | 0.71 / 2.62 / 9.77 / 2.00 | 0.79 / 1.89 / 8.34 / 1.06 |
| ESIM | AREC (Ours) | 0.40 / 0.23 / 4.86 / 0.70 | 0.60 / 0.15 / 11.02 / 0.62 | 0.73 / 0.36 / 12.43 / 0.41 |
| BERT | CO-ATTENTION | 0.52 / 0.45 / 27.91 / 58.20 | 0.65 / 0.34 / 26.81 / 46.40 | 0.61 / 0.50 / 29.60 / 57.68 |
| BERT | LEAVEONEOUT | 1.00 / 0.44 / 45.50 / 50.05 | 0.64 / 0.36 / 39.82 / 66.35 | 0.93 / 0.48 / 43.51 / 58.19 |
| BERT | BACKSELECT | 0.92 / 0.45 / 41.32 / 42.08 | 0.69 / 0.37 / 40.08 / 60.90 | 0.98 / 0.48 / 40.94 / 55.80 |
| BERT | LIME | 0.82 / 0.44 / 39.69 / 57.69 | 0.62 / 0.36 / 44.01 / 96.05 | 0.99 / 0.46 / 50.47 / 92.14 |
| BERT | GRADIENT | 1.77 / 0.39 / 75.58 / 127.92 | 4.63 / 0.16 / 90.35 / 74.64 | 3.59 / 0.26 / 90.93 / 132.30 |
| BERT | INTEGRATGRAD | 1.45 / 0.42 / 59.82 / 56.57 | 1.21 / 0.32 / 54.30 / 70.37 | 2.52 / 0.31 / 74.26 / 90.15 |
| BERT | DIFFMASK | 0.62 / 1.00 / 14.40 / 7.41 | 1.61 / 2.67 / 19.43 / 20.17 | 0.70 / 0.95 / 18.95 / 10.26 |
| BERT | AREC (Ours) | 0.43 / 0.36 / 6.05 / 2.18 | 0.47 / 0.28 / 8.30 / 2.65 | 0.53 / 0.44 / 8.56 / 0.79 |

"For a model, we first generate its alignment rationales using AREC, then",
"we evaluate its alignment plausibility (Jacovi and Goldberg, 2020): how well do its alignment rationales agree with human judgments (DeYoung et al., 2020).",
"Since Section 4.1 established that our method is faithful, alignment plausibility reflects a model's power of alignment detection, i.e., whether it makes a prediction based on the right alignments.",
"Figure 3 illustrates the evaluation process.",
"Firstly, let us look at Table 3, which shows the accuracy of the various models across datasets.",
"DA, ESIM and BERT all achieve high and closely tied accuracy on SNLI.",
"However, they differ on lexical reasoning: BERT significantly surpasses the others on BNLI.",
"Additionally, none of them is robust to the overlap heuristic, as their performance is extremely poor on non-entailment instances.",
"We seek to uncover the underlying reasons (Section 4.2.2) and to make improvements (Section 4.2.3) using our AREC.",
"We define different metrics to measure alignment plausibility (or equally speaking, alignment rationale agreements with humans) in various datasets.",
"For ESNLI, since it is annotated at the text level, we simply collect the corresponding words to convert an alignment rationale into a text rationale for comparison.",
"We adopt IOU-F1 and Token-F1 from DeYoung et al. (2020), and we only use the subset of ESNLI whose instances are labeled contradiction for our evaluation (footnote 10).",
"In BNLI, each sentence pair differs by a single word or phrase.",
"Naturally this pair forms up an annotation, which should be counted in a golden alignment rationale.",
"Further, we reasonably presume this pair is the most essential alignment in its corresponding alignment rationale.",
"Thus, three metrics are defined: 1) Max-F1: we retain the alignment with the maximum score (according to LEAVEONEOUT) from the alignment rationale output by AREC.",
"Max-F1 is the F1 measure comparing the retained alignments with the annotations.",
"2) Exact-Inc: this metric is the proportion of cases in which the alignment rationale includes the annotated alignment.",
"Footnote 10: in ESNLI, every contradiction instance selects words in both the premise and the hypothesis to make up the text rationale, which fits with AREC explanations.",
"3) Soft-Inc: a relaxed version of Exact-Inc, defined as the average recall when comparing alignment rationales with annotations.",
"Details are shown in Figure 3.",
"We carry out human evaluations on HANS because it is not annotated with any form of rationales.",
"We ask two human annotators whether (yes/no) they agree with the decision process revealed by AREC, and we report the average agreement ratio (see Appendix D for details).",
"1) Across datasets, alignment plausibility is consistent with accuracy performance to varying degrees.",
"This is especially clear on BNLI, where BERT substantially surpasses its competitors on all metrics, quantitatively revealing that alignment detection ability is important and distinguishes NLI models.",
"We also discover that explicitly modeling order information is useful for NLI: ESIM achieves better accuracy than DA on SNLI even with poorer alignment plausibility.",
"2) Our explanation method is helpful to detect artifacts or biases leveraged by the model.",
"For example, though DA obtains high accuracy on the entailment subset of HANS, its low alignment plausibility suggests that it usually makes the right prediction with the wrong alignments (see Appendix D for examples).",
"Further, all the models are brittle at catching reasonable alignments when facing non-entailment instances in HANS.",
"As we will discuss next, they tend to do shallow, literal lexical matching, which we conjecture is also the reason why they fail on accuracy.",
"In summary, the ability to capture correct alignments is closely related to accuracy performance in NLI.",
"This conclusion has often been discussed qualitatively in previous works.",
"To our knowledge, however, we are the first to illustrate and demonstrate this point exhaustively via quantitative evaluation.",
"With our AREC, we find that all three models tend to align overlapping words between the sentence pair regardless of their syntactic or semantic roles, causing wrong predictions on HANS.",
"Figure 4 presents an example, where the model mistakenly matches identical words.",
"However, president in the premise and doctor in the hypothesis are subjects of the same predicate advised, so they should be aligned, as should doctor in the premise and president in the hypothesis.",
Table 4: Accuracy performances of different models across different datasets.

| Method | Entailment Lex | Entailment Sub | Entailment Cons | Non-Entailment Lex | Non-Entailment Sub | Non-Entailment Cons | Avg |
|---|---|---|---|---|---|---|---|
| DA | 97.18 | 96.02 | 97.62 | 2.66 | 1.76 | 3.00 | 49.71 |
| ESIM | 99.68 | 98.76 | 99.60 | 0.18 | 0.12 | 4.22 | 50.43 |
| BERT | 98.82 | 100.00 | 99.86 | 43.02 | 2.94 | 3.82 | 58.08 |
| DA (SRLGUID) | 93.66 | 96.64 | 96.36 | 88.24 | 25.88 | 3.28 | 67.34 |
| ESIM (SRLGUID) | 93.94 | 96.76 | 99.42 | 99.10 | 32.28 | 5.30 | 71.13 |
| BERT (SRLGUID) | 96.24 | 99.36 | 99.74 | 96.26 | 29.44 | 0.24 | 70.21 |
| BERT (SRLMTL) | 91.00 | 98.00 | 95.00 | 71.00 | 13.00 | 25.00 | 66.00 |

"To remedy this, we turn to Semantic Role Labeling (SRL), the task of recognizing the arguments of a predicate and assigning semantic role labels to them,",
"to guide alignments for NLI models.",
"In particular, we employ an off-the-shelf BERT-based SRL model (Shi and Lin, 2019) to extract predicates and their corresponding arguments from the premise and the hypothesis in advance.",
"Then we limit the model to align only identical predicates and phrases bearing identical semantic roles, by applying a corresponding co-attention mask (the SRL mask), as presented in Figure 4.",
"In this way, semantic role information is injected into the model.",
"Note that there is no need to modify the model architecture or design a new training protocol, in contrast to Cengiz and Yuret (2020), who jointly train NLI and SRL in a multi-task learning (MTL) manner.",
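"A sketch of the SRL-mask construction is given below; the per-token (predicate, role) tag representation is an assumption made for illustration, with tags extracted in advance by the SRL model.",

```python
def build_srl_mask(premise_tags, hypothesis_tags):
    """premise_tags / hypothesis_tags: one (predicate, role) pair per token,
    or None for tokens outside any predicate-argument structure.
    Only identical predicates and arguments bearing identical semantic
    roles for the same predicate are permitted to align."""
    Z = [[0] * len(hypothesis_tags) for _ in premise_tags]
    for i, p_tag in enumerate(premise_tags):
        for j, h_tag in enumerate(hypothesis_tags):
            if p_tag is not None and p_tag == h_tag:
                Z[i][j] = 1
    return Z
```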
"We report model accuracy when alignments are guided by SRL masks (subscripted with SRLGUID) in Table 4.",
"The results show that, without obvious performance drops on entailment instances, applying SRL masks yields significant improvements on non-entailment instances, especially for the lexical heuristic.",
"Nevertheless, it does not boost model performance on the constituent heuristic.",
"We speculate that this is because constituent-heuristic instances involve constructions such as prepositions, which cannot be handled with alignments alone.",
"Overall, the results show that guiding alignments is a potentially promising way to incorporate useful information.",
"Additionally, this provides further evidence, from another point of view, that our method is faithful to the models.",
"In this work, we propose AREC, a new post-hoc method to generate alignment rationales for co-attention based NLI models.",
"Experimental results show that our explanation is faithful and readable.",
"We study typical models using our method and shed light on potential improvements.",
"We believe our method and findings are illuminating for NLI.",
"For future works, we plan to explore model-agnostic alignment explanations, and analyze models in other NLP tasks.",
"This work was supported by the National Key Research and Development Program of China (No.2020AAA0105200), the National Natural Science Foundation of China (No. 61922085, 61831022, 61906196).",
"This work is also supported by Beijing Academy of Artificial Intelligence (BAAI2019QN0301), the Key Research Program of the Chinese Academy of Sciences (Grant NO. ZDBS-SSW-JSC006) and independent research project of National Laboratory of Pattern Recognition."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"objective",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"result",
"objective",
"method",
"abstain",
"objective",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"method",
"objective",
"other",
"other"
] |
[
"What makes some types of languages more probable than others?",
"For instance, we know that almost all spoken languages contain the vowel phoneme /i/; why should that be?",
"The field of linguistic typology seeks to answer these questions and, thereby, divine the mechanisms that underlie human language.",
"In our work, we tackle the problem of vowel system typology, i.e., we propose a generative probability model of which vowels a language contains.",
"In contrast to previous work, we work directly with the acoustic information (the first two formant values) rather than modeling discrete sets of phonemic symbols (IPA).",
"We develop a novel generative probability model and report results based on a corpus of 233 languages.",
"Human languages are far from arbitrary; cross-linguistically, they exhibit surprising similarity in many respects and many properties appear to be universally true.",
"The field of linguistic typology seeks to investigate, describe and quantify the axes along which languages vary.",
"One facet of language that has been the subject of heavy investigation is the nature of vowel inventories, i.e., which vowels a language contains.",
"It is a cross-linguistic universal that all spoken languages have vowels (Gordon, 2016), and the underlying principles guiding vowel selection are understood: vowels must be both easily recognizable and well-dispersed (Schwartz et al., 2005).",
"In this work, we offer a more formal treatment of the subject, deriving a generative probability model of vowel inventory typology.",
"Our work builds on (Cotterell and Eisner, 2017) by investigating not just discrete IPA inventories but the cross-linguistic variation in acoustic formants.",
"The philosophy behind our approach is that linguistic typology should be treated probabilistically and its goal should be the construction of a universal prior over potential languages.",
"A probabilistic approach does not rule out linguistic systems completely (as long as one's theoretical formalism can describe them at all), but it can position phenomena on a scale from very common to very improbable.",
"Probabilistic modeling also provides a discipline for drawing conclusions from sparse data.",
"While we know of over 7000 human languages, we have some sort of linguistic analysis for only 2300 of them (Comrie et al., 2013), and the dataset used in this paper (Becker-Kristal, 2010) provides simple vowel data for fewer than 250 languages.",
"Formants are the resonant frequencies of the human vocal tract during the production of speech sounds.",
"We propose a Bayesian generative model of vowel inventories, where each language's inventory is a finite subset of acoustic vowels represented as points ( F 1 , F 2 ) R 2 .",
"We deploy tools from the neural-network and point-process literatures and experiment on a dataset with 233 distinct languages.",
"We show that our most complicated model outperforms simpler models.",
"Much of human communication takes place through speech: one conversant emits a sound wave to be comprehended by a second.",
"In this work, we consider the nature of the portions of such sound waves that correspond to vowels.",
"We briefly review the relevant bits of acoustic phonetics so as to give an overview of the data we are actually modeling and develop our notation.",
"The anatomy of a sound wave.",
"The sound wave that carries spoken language is a function from time to amplitude, describing sound pressure variation in the air.",
"Figure 1: Example spectrogram of three English vowels, /i/, /u/ and /ɑ/; the x-axis is time and the y-axis is frequency; the first two formants F1 and F2 are marked with arrows for each vowel; the figure was made with Praat (Boersma et al., 2002).",
"To distinguish vowels, it is helpful to transform this function into a spectrogram (Fig. 1) by using a short-time Fourier transform",
"(Deng and O'Shaughnessy, 2003, Chapter 1) to decompose each short interval of the wave function into a weighted sum of sinusoidal waves of different frequencies (measured in Hz).",
"At each interval, the variable darkness of the spectrogram indicates the weights of the different frequencies.",
"In phonetic analysis, a common quantity to consider is a formant: a local maximum of the (smoothed) frequency spectrum.",
"The fundamental frequency F 0 determines the pitch of the sound.",
"The formants F 1 and F 2 determine the quality of the vowel.",
"Two is all you need (and what we left out).",
"In terms of vowel recognition, it is widely speculated that humans rely almost exclusively on the first two formants of the sound wave (Ladefoged, 2001, Chapter 5).",
"The two-formant assumption breaks down in edge cases: e.g., the third formant F 3 helps to distinguish the roundness of the vowel (Ladefoged, 2001, Chapter 5).",
"Other non-formant features may also play a role.",
"For example, in tonal languages, the same vowel may be realized with different tones (which are signaled using F0): Mandarin Chinese distinguishes mǎ (horse) from má (hemp) without modifying the quality of the vowel /a/.",
"Other features, such as creaky voice, can play a role in distinguishing phonemes.",
"We do not explicitly model any of these aspects of vowel space, limiting ourselves to ( F 1 , F 2 ) as in previous work (Liljencrants and Lindblom, 1972).",
"However, it would be easy to extend all the models we will propose here to incorporate such information, given appropriate datasets.",
"The vowel inventories of the world's languages display clear structure and appear to obey several underlying principles.",
"The most prevalent of these principles are focalization and dispersion .",
"Focalization.",
"The notion of focalization grew out of quantal vowel theory (Stevens, 1989).",
"Quantal vowels are those that are phonetically better than others.",
"They tend to display certain properties, e.g., the formants tend to be closer together (Stevens, 1987).",
"Cross-linguistically, quantal vowels are the most frequently attested vowels, e.g., the cross-linguistically common vowel /i/ is considered quantal, but less common /y/ is not.",
"Dispersion.",
"The second core principle of vowel system organization is known as dispersion.",
"As the name would imply, the principle states that the vowels in good vowel systems tend to be spread out.",
"The motivation for such a principle is clear: a well-dispersed set of vowels reduces a listener's potential confusion about which vowel is being pronounced.",
"See Schwartz et al. (1997) for a review of dispersion in vowel system typology and its interaction with focalization, which has led to the joint dispersion-focalization theory.",
"Notation.",
"We will denote the universal set of international phonetic alphabet (IPA) symbols as $\bar{\mathcal{V}}$.",
"The observed vowel inventory for language $\ell$ has size $n_\ell$ and is denoted $\mathcal{V}^\ell = \{(\bar{v}_1, \mathbf{v}_1), \dots, (\bar{v}_{n_\ell}, \mathbf{v}_{n_\ell})\} \subset \bar{\mathcal{V}} \times \mathbb{R}^d$, where for each $k \in [1, n_\ell]$, $\bar{v}_k \in \bar{\mathcal{V}}$ is an IPA symbol assigned by a linguist and $\mathbf{v}_k \in \mathbb{R}^d$ is a vector of $d$ measurable phonetic quantities.",
"In short, the IPA symbol $\bar{v}_k$ was assigned as a label for a phoneme with pronunciation $\mathbf{v}_k$.",
"The ordering of the elements within $\mathcal{V}^\ell$ is arbitrary.",
"Goals: this framework recognizes that the same IPA symbol $\bar{v}$ (such as /u/) may represent a slightly different sound $\mathbf{v}$ in one language than in another, although they are transcribed identically.",
"We are specifically interested in how the vowels in a language influence one another's fine-grained pronunciation in $\mathbb{R}^d$.",
"In general, there is no reason to suspect that speakers of two languages, whose phonological systems contain the same IPA symbol, should produce that vowel with identical formants.",
"Data: for the remainder of the paper, we take $d = 2$, so that each $\mathbf{v} = (F_1, F_2) \in \mathbb{R}^2$ is the vector of the first two formant values, as compiled from the field literature by Becker-Kristal (2006).",
"This dataset provides inventories $\mathcal{V}^\ell$ in the form above.",
"Thus, we do not consider further variation of the vowel pronunciation that may occur within the language (between speakers, between tokens of the vowel, or between earlier and later intervals within a token).",
"Previous work (Cotterell and Eisner, 2017) has placed a distribution over discrete phonemes, ignoring the variation across languages in the pronunciation of each phoneme.",
"In this paper, we crack open the phoneme abstraction, moving to a learned set of finer-grained phones.",
"Cotterell and Eisner (2017) proposed (among other options) using a determinantal point process (DPP) over a universal inventory $\bar{\mathcal{V}}$ of 53 symbolic (IPA) vowels.",
"A draw from such a DPP is a language-specific inventory of vowel phonemes, $\bar{\mathcal{V}}^\ell \subseteq \bar{\mathcal{V}}$.",
"In this paper, we say that a language instead draws its inventory from a larger set $\mathcal{V}$, again using a DPP.",
"In both cases, the reason to use a DPP is that it prefers relatively diverse inventories whose individual elements are relatively quantal.",
"While we could in principle identify $\mathcal{V}$ with $\mathbb{R}^d$, for convenience we still take it to be a (large) discrete finite set $\mathcal{V} = \{v_1, \dots, v_N\}$, whose elements we call phones.",
"$\mathcal{V}$ is a learned cross-linguistic parameter of our model; thus, its elements (the universal phones) may or may not correspond to phonetic categories traditionally used by linguists.",
"We presume that language $\ell$ draws from the DPP a subset $\mathcal{V}^\ell \subseteq \mathcal{V}$, whose size we call $n_\ell$.",
"For each universal phone $v_i$ that appears in this inventory $\mathcal{V}^\ell$, the language then draws an observable language-specific pronunciation $\mathbf{v}_i^\ell \sim \mathcal{N}(\mu_i, \sigma^2 I)$ from a distribution associated cross-linguistically with the universal phone $v_i$.",
"We now have an inventory of pronunciations.",
"As a final step in generating the vowel inventory, we could model IPA labels.",
"For each $v_i \in \mathcal{V}^\ell$, a field linguist presumably draws the IPA label $\bar{v}_i$ conditioned on all the pronunciations $\{\mathbf{v}_i \in \mathbb{R}^d : v_i \in \mathcal{V}^\ell\}$ in the inventory (and perhaps also on their underlying phones $v_i \in \mathcal{V}$).",
"This labeling process may be complex.",
"While each pronunciation in $\mathbb{R}^d$ (or each underlying phone in $\mathcal{V}$) may have a preference for certain IPA labels in $\bar{\mathcal{V}}$, the $n_\ell$ labels must be drawn jointly, because the linguist will take care not to use the same label for two phones, and also because the linguist may like to describe the inventory using a small number of distinct IPA features, which will tend to favor factorial grids of symbols.",
"The linguist's use of IPA features may also be informed by phonological and phonetic processes in the language.",
"We leave modeling of this step to future work; so our current likelihood term ignores the evidence contributed by the IPA labels in the dataset, considering only the pronunciations in R d .",
"The overall idea is that human languages draw their inventories from some universal prior, which we are attempting to reconstruct.",
"A caveat is that we will train our method by maximum-likelihood, which does not quantify our uncertainty about the reconstructed parameters.",
"An additional caveat is that some languages in our dataset are related to one another, which belies the idea that they were drawn independently.",
"Ideally, one ought to capture these relationships using hierarchical or evolutionary modeling techniques.",
"Before delving into our generative model, we briefly review technical background used by Cotterell and Eisner (2017).",
"A DPP is a probability distribution over the subsets of a fixed ground set of size $N$; in our case, the ground set is the set of phones $\mathcal{V}$.",
"The DPP is usually given as an L-ensemble (Borodin and Rains, 2005), meaning that it is parameterized by a positive semi-definite matrix $L \in \mathbb{R}^{N \times N}$.",
"Given a discrete base set $\mathcal{V}$ of phones, the probability of a subset $\mathcal{V}^\ell \subseteq \mathcal{V}$ is given by $p(\mathcal{V}^\ell) \propto \det(L_{\mathcal{V}^\ell})$ (1), where $L_{\mathcal{V}^\ell}$ is the submatrix of $L$ corresponding to the rows and columns associated with the subset $\mathcal{V}^\ell \subseteq \mathcal{V}$.",
"The entry $L_{ij}$, where $i \neq j$, has the effect of describing the similarity between the elements $v_i$ and $v_j$ (both in $\mathcal{V}$), an ingredient needed to model dispersion.",
"And the entry $L_{ii}$ describes the quality (focalization) of the vowel $v_i$, i.e., how much the model wants to have $v_i$ in a sampled set independent of the other members.",
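"For intuition, the normalized probability that an L-ensemble assigns to a particular phone subset can be computed directly; this sketch uses the standard normalizer det(L + I), with hypothetical variable names.",

```python
import numpy as np

def dpp_log_prob(L, subset):
    """Log-probability of `subset` (a list of phone indices) under the
    L-ensemble DPP of eq. (1)."""
    L_sub = L[np.ix_(subset, subset)]
    _, logdet = np.linalg.slogdet(L_sub)
    _, lognorm = np.linalg.slogdet(L + np.eye(L.shape[0]))  # normalizer det(L + I)
    return logdet - lognorm
```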
"In this work, each phone $v_i \in \mathcal{V}$ is associated with a probability density over the space $\mathbb{R}^2$ of possible pronunciations.",
"Our measure of phone similarity will consider the overlap between the densities associated with two phones.",
"This works as follows: given two densities $f(x, y)$ and $f'(x, y)$ over $\mathbb{R}^2$, we define the kernel (Jebara et al., 2004) as $K(f, f'; \rho) = \int_x \int_y f(x, y)^{\rho} f'(x, y)^{\rho} \, dx \, dy$ (3).",
"Figure 2: Joint likelihood of the $M$ vowel systems under our deep generative probability model for continuous-space vowel inventories, $\prod_{\ell=1}^{M} \big[ p(\mathbf{v}_{\ell,1}, \dots, \mathbf{v}_{\ell,n_\ell} \mid \mu_1, \dots, \mu_N, N) \big] \, p(\mu_1, \dots, \mu_N \mid N) \, p(N) = \prod_{\ell=1}^{M} \big[ \sum_{a \in \mathcal{A}(n_\ell, N)} \big( \prod_{k=1}^{n_\ell} p(\mathbf{v}_{\ell,k} \mid a_k) \big) \, p(\mathcal{V}^\ell(a) \mid \mu_1, \dots, \mu_N, N) \big] \, p(\mu_1, \dots, \mu_N \mid N) \, p(N)$ (2), whose four factors correspond to steps 4, 3, 2 and 1 of the generative process, respectively.",
"Here language $\ell$ has an observed inventory of pronunciations $\{\mathbf{v}_{\ell,k} : 1 \le k \le n_\ell\}$, and $a_k \in [1, N]$ denotes a phone that might be responsible for the pronunciation $\mathbf{v}_{\ell,k}$.",
"Thus, $a$ denotes some way to jointly label all $n_\ell$ pronunciations with distinct phones.",
"We must sum over all $\binom{N}{n_\ell}$ such labelings $a \in \mathcal{A}(n_\ell, N)$, since the true labeling is not observed.",
"In other words, we sum over all ways $a$ of completing the data for language $\ell$.",
"Within each summand, the product of factors 3 and 4 is the probability of the completed data, i.e., the joint probability of generating the inventory $\mathcal{V}^\ell(a)$ of phones used in the labeling and their associated pronunciations.",
"Factor 3 considers the prior probability of $\mathcal{V}^\ell(a)$ under the DPP, and factor 4 is a likelihood term that considers the probability of the associated pronunciations.",
"In our setting, $f$ and $f'$ will both be Gaussian distributions with means $\mu$ and $\mu'$ that share a fixed spherical covariance matrix $\sigma^2 I$.",
"Then eq. (3), and indeed its generalization to any $\mathbb{R}^d$, has a closed-form solution (Jebara et al., 2004, §3.1): $K(f, f'; \rho) = (2\pi)^{\frac{(1-2\rho)d}{2}} \, (2\rho)^{-\frac{d}{2}} \, (\sigma^2)^{\frac{(1-2\rho)d}{2}} \exp\Big( -\frac{\rho \, \lVert \mu - \mu' \rVert^2}{4 \sigma^2} \Big)$ (4).",
"Notice that making $\rho$ small (i.e., high temperature) has an effect on (4) similar to scaling the variance $\sigma^2$ by the temperature, but it also changes the scale of $K$, which affects the balance between dispersion and focalization in (6) below.",
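"A direct transcription of the closed form in eq. (4) looks as follows; this is a sketch under the assumption that the prefactor arrangement reconstructed above, following Jebara et al. (2004, §3.1), is correct.",

```python
import numpy as np

def prob_product_kernel(mu1, mu2, sigma2, rho, d=2):
    """Eq. (4): probability product kernel of two spherical Gaussians
    N(mu, sigma2 * I) that share one covariance; rho acts like an
    inverse temperature."""
    prefactor = ((2 * np.pi) ** ((1 - 2 * rho) * d / 2)
                 * (2 * rho) ** (-d / 2)
                 * sigma2 ** ((1 - 2 * rho) * d / 2))
    dist2 = np.sum((np.asarray(mu1) - np.asarray(mu2)) ** 2)
    return prefactor * np.exp(-rho * dist2 / (4 * sigma2))
```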
"The probability kernel given in eq. (3) naturally handles the linguistic notion of dispersion.",
"What about focalization?",
"We say that a phone is focal to the extent that it has a high score $F(\mu) = \exp\big( U_2 \tanh(U_1 \mu + b_1) + b_2 \big) > 0$ (5), where $\mu$ is the mean of its density.",
"To learn the parameters of this neural network from data is to learn which phones are focal.",
"We use a neural network since the focal regions of R 2 are distributed in a complex way.",
"Since $L$ is the sum of two positive definite matrices (the first specializes a known kernel and the second is diagonal and positive), it is also positive definite.",
"As a result, it can be used to parameterize a DPP over $\mathcal{V}$.",
"Indeed, since $L$ is positive definite and not merely positive semi-definite, it will assign positive probability to every subset of $\mathcal{V}$.",
"As previously noted, this DPP does not define a distribution over an infinite set, e.g., the powerset of $\mathbb{R}^2$, as does recent work on continuous DPPs (Affandi et al., 2013).",
"Rather, it defines a distribution over the powerset of a finite set of densities.",
"Once we have sampled a subset of densities, a real-valued quantity may be additionally sampled from each sampled density.",
"We are now in a position to expound our generative model of continuous-space vowel typology.",
"We generate a set of formant pairs for $M$ languages in a four-step process.",
"Note that throughout this exposition, language-specific quantities will be superscripted with an integer language marker $\ell$, whereas universal quantities are left unsuperscripted.",
"The generative process is written in algorithmic form in Alg. 1.",
"Note that each step is numbered and color-coded for ease of comparison with the full joint likelihood in Fig. 2.",
"Step 1: $p(N)$.",
"We sample the size $N$ of the universal phone inventory $\mathcal{V}$ from a Poisson distribution with rate parameter $\lambda$, i.e., $N \sim \mathrm{Poisson}(\lambda)$.",
"That is, we do not presuppose a certain number of phones in the model.",
"Step 2: $p(\mu_1, \dots, \mu_N)$.",
"Next, we sample the means $\mu_i$ of the Gaussian phones.",
"In the model presented here, we assume that each phone is generated independently, so $p(\mu_1, \dots, \mu_N) = \prod_{i=1}^{N} p(\mu_i)$.",
"Also, we assume a standard Gaussian prior over the means, $\mu_i \sim \mathcal{N}(\mathbf{0}, I)$.",
"The sampled means define our $N$ Gaussian phones $\mathcal{N}(\mu_i, \sigma^2 I)$: we assume for simplicity that all phones share a single spherical covariance matrix, defined by the hyperparameter $\sigma^2$.",
"The dispersion and focalization of these phones define the matrix $L$ according to equations (4)-(6), where $\rho$ in (4) and the weights of the focalization neural net (5) are also hyperparameters.",
"Step 3: $p(\mathcal{V}^\ell \mid \mu_1, \dots, \mu_N)$.",
"Next, for each language $\ell \in [1, \dots, M]$, we sample a diverse subset of the $N$ phones via a single draw from a DPP parameterized by the matrix $L$: $\mathcal{V}^\ell \sim \mathrm{DPP}(L)$ (8), where $\mathcal{V}^\ell \subseteq [1, N]$.",
"Thus, $i \in \mathcal{V}^\ell$ means that language $\ell$ contains phone $v_i$.",
"Note that even the size of the inventory, $n_\ell = |\mathcal{V}^\ell|$, is chosen by the DPP.",
"In general, we have $n_\ell \ll N$.",
"Step 4: $\prod_{i \in \mathcal{V}^\ell} p(\mathbf{v}_i^\ell \mid \mu_i)$.",
"The final step in our generative process is that the phones $v_i$ in language $\ell$ must generate the pronunciations $\mathbf{v}_i^\ell \in \mathbb{R}^2$ (formant vectors) that are actually observed in language $\ell$.",
"Each vector takes two steps.",
"For each $i \in \mathcal{V}^\ell$, we generate an underlying $\underline{\mathbf{v}}_i^\ell \in \mathbb{R}^2$ from the corresponding Gaussian phone.",
"Then, we run this vector through a feed-forward neural network $\phi$.",
"In short: $\underline{\mathbf{v}}_i^\ell \sim \mathcal{N}(\mu_i, \sigma^2 I)$ (9) and $\mathbf{v}_i^\ell = \phi(\underline{\mathbf{v}}_i^\ell)$ (10), where the second step is deterministic.",
"We can fuse these two steps into a single step $p(\mathbf{v}_i^\ell \mid \mu_i)$, whose closed-form density is given in eq. (12) below.",
"In effect, step 4 takes a Gaussian phone as input and produces the observed formant vector with an underlying formant vector in the middle.",
"This completes our generative process.",
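"To summarize steps 3-4, here is a toy sampler for a single language; the exact DPP draw is done by brute-force enumeration, which is feasible only for very small N, and `phi` stands in for the learned neural map of eq. (10).",

```python
import numpy as np
from itertools import combinations

def sample_language(mus, L, phi, sigma2, rng=None):
    """mus: (N, 2) phone means; L: (N, N) DPP kernel; phi: R^2 -> R^2."""
    rng = rng or np.random.default_rng()
    N = len(mus)
    # step 3: draw a phone subset with probability proportional to det(L_V)
    subsets = [s for r in range(1, N + 1) for s in combinations(range(N), r)]
    weights = np.array([np.linalg.det(L[np.ix_(s, s)]) for s in subsets])
    V = subsets[rng.choice(len(subsets), p=weights / weights.sum())]
    # step 4: eq. (9) underlying vectors, then eq. (10) neural transformation
    underlying = [rng.multivariate_normal(mus[i], sigma2 * np.eye(2)) for i in V]
    return list(V), [phi(u) for u in underlying]
```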
"We do not observe all the steps, but only the final collection of pronunciations $\mathbf{v}_i^\ell$ for each language, where the subscripts $i$ that indicate phone identity have been lost.",
"The probability of this incomplete dataset involves summing over the possible phones for each pronunciation, and is presented in Fig. 2.",
"A crucial bit of our model is running a sample from a Gaussian through a neural network.",
"Under certain restrictions, we can find a closed form for the resulting density; we discuss these restrictions below.",
"Let $\phi$ be a depth-2 multi-layer perceptron: $\phi(\underline{\mathbf{v}}_i) = W_2 \tanh(W_1 \underline{\mathbf{v}}_i + b_1) + b_2$ (11).",
"In order to find a closed-form solution, we require that $\phi$ be a diffeomorphism, i.e., an invertible mapping from $\mathbb{R}^2$ to $\mathbb{R}^2$ where both $\phi$ and its inverse $\phi^{-1}$ are differentiable.",
"This will be true as long as $W_1, W_2 \in \mathbb{R}^{2 \times 2}$ are full-rank square matrices and we choose a smooth, invertible activation function, such as $\tanh$.",
"Under those conditions, we may apply the standard theorem for transforming a random variable (see Stark and Woods, 2011): $p(\mathbf{v}_i \mid \mu_i) = p(\phi^{-1}(\mathbf{v}_i) \mid \mu_i) \, \lvert \det J_{\phi^{-1}}(\mathbf{v}_i) \rvert = p(\underline{\mathbf{v}}_i \mid \mu_i) \, \lvert \det J_{\phi^{-1}}(\mathbf{v}_i) \rvert$ (12), where $J_{\phi^{-1}}(x)$ is the Jacobian of the inverse of the neural network at the point $x$.",
"Recall that $p(\underline{\mathbf{v}}_i \mid \mu_i)$ is Gaussian.",
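"The density in eq. (12) can be evaluated as below; for illustration the Jacobian of the inverse map is obtained by finite differences, though an analytic Jacobian of the two-layer network would be used in practice.",

```python
import numpy as np

def transformed_log_density(v, mu, sigma2, phi_inv, eps=1e-5):
    """log p(v | mu) for v = phi(u) with u ~ N(mu, sigma2 * I) in R^2."""
    u = phi_inv(v)
    log_gauss = (-np.sum((u - mu) ** 2) / (2 * sigma2)
                 - np.log(2 * np.pi * sigma2))          # log N(u; mu, sigma2 I), d = 2
    J = np.column_stack([                               # numerical Jacobian of phi_inv
        (phi_inv(v + eps * e) - phi_inv(v - eps * e)) / (2 * eps)
        for e in np.eye(2)
    ])
    return log_gauss + np.log(abs(np.linalg.det(J)))    # eq. (12), in log space
```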
"Imbued in our generative story are a number of assumptions about the linguistic processes behind vowel inventories.",
"We briefly draw connections between our theory and the linguistics literature.",
"Why underlying phones?",
"A technical assumption of our model is the existence of a universal set of underlying phones.",
"Each phone is equipped with a probability distribution over reported acoustic measurements (pronunciations), to allow for a single phone to account for multiple slightly different pronunciations in different languages (though never in the same language).",
"This distribution can capture both actual interlingual variation and also random noise in the measurement process.",
"While our universal phones may seem to resemble the universal IPA symbols used in phonological transcription, they lack the rich featural specifications of such phonemes.",
"A phone in our model has no features other than its mean position, which wholly determines its behavior.",
"Our universal phones are not a substantive linguistic hypothesis, but are essentially just a way of partitioning R 2 into finitely many small regions whose similarity and focalization can be precomputed.",
"This technical trick allows us to use a discrete rather than a continuous DPP over the $\mathbb{R}^2$ space (footnote 1).",
"Why a neural network?",
"Our phones are Gaussians of spherical variance $\sigma^2$, presumed to be scattered with variance 1 about a two-dimensional latent vowel space.",
"Distances in this latent space are used to compute the dissimilarity of phones for modeling dispersion, and also to describe the phone's ability to vary across languages.",
"That is, two phones that are distant in the latent space can appear in the same inventorypresumably they are easy to discriminate in both perception and articulationand it is easy to choose which one better explains an acoustic measurement, thereby affecting the other measurements that may appear in the inventory.",
"We relate this latent space to measurable acoustic space by a learned diffeomorphism $\phi$ (Cotterell and Eisner, 2017).",
"$\phi^{-1}$ can be regarded as warping the acoustic distances into perceptual/articulatory distances.",
"In some high-resolution regions of acoustic space, phones with fairly similar ( F 1 , F 2 ) values might yet be far apart in the latent space.",
"Conversely, in other regions, relatively large acoustic changes in some direction might not prevent two phones from acting as similar, or two pronunciations from being attributed to the same phone.",
"Footnote 1: indeed, we could have simply taken our universal phone set to be a huge set of tiny, regularly spaced overlapping Gaussians that covered (say) the unit circle.",
"As a computational matter, we instead opted to use a smaller set of Gaussians, giving the learner the freedom to infer their positions and tune their variance $\sigma^2$.",
"Because of this freedom, this set should not be too large, or a MAP learner may overfit the training data with zero-variance Gaussians and be unable to explain the test languages (similar to overfitting a Gaussian mixture model).",
"In general, a circle of radius $\epsilon$ in latent space may be mapped by $\phi$ to an oddly shaped connected region in acoustic space, and a Gaussian in latent space may be mapped to a multimodal distribution.",
"We fit our model via MAP-EM (Dempster et al., 1977).",
"The E-step involves deciding which phones each language has.",
"To achieve this, we fashion a Gibbs sampler (Geman and Geman, 1984), yielding a Markov-Chain Monte Carlo E-step (Levine and Casella, 2001).",
"Inference in our model is intractable even when the phone means $\mu_1, \dots, \mu_N$ are fixed.",
"Given a language with $n_\ell$ vowels, we have to determine which subset of the $N$ phones best explains those vowels.",
"As discussed above, the alignment $a$ between the $n_\ell$ vowels and $n_\ell$ of the $N$ phones is a latent variable.",
"Marginalizing it out is #P-hard, as we can see that it is equivalent to summing over all bipartite matchings in a weighted graph, which, in turn, is as costly as computing the permanent of a matrix (Valiant, 1979).",
"Our sampler (footnote 2) is an approximation algorithm for the task.",
"We are interested in sampling $a$, the labeling of observed vowels with universal phones.",
"Note that this implicitly samples the language's phone inventory $\mathcal{V}^\ell(a)$, which is fully determined by $a$.",
"Specifically, we employ an MCMC method closely related to Gibbs sampling.",
"At each step of the sampler, we update our vowel-phone alignment $a$ as follows.",
"Choose a language $\ell$ and a vowel index $k \in [1, n_\ell]$, and let $i = a_k$ (that is, pronunciation $\mathbf{v}_{\ell,k}$ is currently labeled with universal phone $v_i$).",
"We will consider changing $a_k$ to $j$, where $j$ is drawn from the $(N - n_\ell)$ phones that do not appear in $\mathcal{V}^\ell(a)$, heuristically choosing $j$ in proportion to the likelihood $p(\mathbf{v}_{\ell,k} \mid \mu_j)$.",
"We then stochastically decide whether to keep $a_k = i$ or set $a_k = j$ in proportion to the resulting values of the product of factors 4 and 3 in eq. (2).",
"For a single E-step, the Gibbs sampler warm-starts with the labeling from the end of the previous iteration's E-step.",
"It sweeps $S = 5$ times through all vowels of all languages, and returns $S$ sampled labelings, one from the end of each sweep.",
"Footnote 2: taken from Volkovs and Zemel (2012, §3.1).",
"We are also interested in automatically choosing the number of phones $N$, for which we take the Poisson rate parameter $\lambda = 100$.",
"To this end, we employ reversible-jump MCMC (Green, 1995), resampling $N$ at the start of every E-step.",
"Learning (M-step): given the set of sampled alignments provided by the E-step, our M-step consists of optimizing the log-likelihood of the now-complete training data using the inferred latent variables.",
"We achieve this through SGD training of the diffeomorphism parameters $\phi$, the means $\mu_i$ of the Gaussian phones, and the parameters of the focalization kernel $F$.",
"Data: our data is taken from the Becker-Kristal corpus (Becker-Kristal, 2006), a compilation of various phonetic studies and the largest multilingual phonetic database.",
"Each entry in the corpus corresponds to a linguist's phonetic description of a language's vowel system: an inventory consisting of IPA symbols, where each symbol is associated with two or more formant values.",
"The corpus contains data from 233 distinct languages.",
"When multiple inventories were available for the same language (due to various studies in the literature), we selected one at random and discarded the others.",
"Baseline #1: removing dispersion.",
"The key technical innovation in our work lies in the incorporation of a DPP into a generative model of vowel formants, a continuous-valued quantity.",
"The role of the DPP is to model the linguistic principle of dispersion; we may cripple this portion of our model, e.g., by forcing $K$ to be a diagonal kernel, i.e., $K_{ij} = 0$ for $i \neq j$.",
"In this case the DPP becomes a Bernoulli Point Process (BPP), a special case of the DPP.",
"Since dispersion is widely accepted to be an important principle governing naturally occurring vowel systems, we expect a system trained without such knowledge to perform worse.",
"Baseline #2: removing the neural network.",
"Another question we may ask of our formulation is whether we actually need a fancy neural mapping to model our typological data well.",
"The human perceptual system is known to perform a non-linear transformation on acoustic signals, starting with the non-linear cochlear transform that is physically performed in the ear.",
"While $\phi^{-1}$ is intended as loosely analogous, we determine its benefit by removing eq. (10) from our generative story, i.e., we take the observed formants $\mathbf{v}_k$ to arise directly from the Gaussian phones.",
"Baseline #3: supervised phones and alignments.",
"A final baseline we consider is supervised phones.",
"Linguists standardly employ a finite set of phone symbols from the international phonetic alphabet (IPA).",
"In phonetic annotation, it is common to map each sound in a language back to this universal discrete alphabet.",
"Under such an annotation scheme, it is easy to discern, cross-linguistically, which vowels originate from the same phoneme: an /ɪ/ in German may be roughly equated with an /ɪ/ in English.",
"However, it is not clear how consistent this annotation truly is.",
"There are several reasons to expect high variance in the cross-linguistic acoustic signal.",
"First, IPA symbols are primarily useful for language-internal phonological distinctions, i.e., one applies the symbol /ɪ/ to distinguish it from /i/ in the given language, rather than to associate it with the sound bearing the same symbol in a second language.",
"Second, field linguists often resort to the closest common IPA symbol rather than an exact match: if a language makes no distinction between /i/ and /ɪ/, it is more common to denote the sound with /i/.",
"Thus, IPA may not be as universal as hoped.",
"Our dataset contains 50 IPA symbols, so this baseline is only reported for $N = 50$.",
"Evaluation in our setting is tricky.",
"The scientific goal of our work is to place a bit of linguistic theory on a firm probabilistic footing, rather than a downstream engineering task whose performance we could measure.",
"We consider three metrics.",
"Cross-entropy: our first evaluation metric is cross-entropy, the average negative log-probability of the vowel systems in held-out test data, given the universal inventory of $N$ phones that we trained through EM.",
"We find this to be the cleanest method for scientific evaluation: it is the metric of optimization and has a clear interpretation, namely, how surprised was the model to see the vowel systems of held-out, but attested, languages?",
"The cross-entropy is the negative log of the bracketed product expression in eq. (2), with $\ell$ now ranging over held-out languages (footnote 3).",

Table 1: Cross-entropy in nats per language (lower is better) and expected Euclidean-distance error of the cloze prediction (lower is better); the overall best value for each task is boldfaced; dashes mark values not reported, since the supervised baseline exists only for N = 50; the N = 57 rows are the case where we allowed N to fluctuate during inference using reversible-jump MCMC, and this was the N value selected at the final EM iteration.

| N | metric | DPP+NN | BPP+NN | DPP | Sup. |
|---|---|---|---|---|---|
| 15 | x-ent | 540.02 | 540.05 | 600.34 | – |
| 15 | cloze1 | 5.76 | 5.76 | 6.53 | – |
| 15 | cloze12 | 4.89 | 4.89 | 5.24 | – |
| 25 | x-ent | 280.47 | 275.36 | 335.36 | – |
| 25 | cloze1 | 5.04 | 5.25 | 6.23 | – |
| 25 | cloze12 | 4.76 | 4.97 | 5.43 | – |
| 50 | x-ent | 222.85 | 231.70 | 320.05 | 1610.37 |
| 50 | cloze1 | 3.38 | 3.16 | 4.02 | 4.96 |
| 50 | cloze12 | 2.73 | 2.93 | 3.04 | 6.95 |
| 57 | x-ent | **212.14** | 220.42 | 380.31 | – |
| 57 | cloze1 | **2.21** | 3.08 | 3.25 | – |
| 57 | cloze12 | 2.01 | 3.05 | 3.41 | – |
| 100 | x-ent | 271.95 | 301.45 | 380.02 | – |
| 100 | cloze1 | 2.26 | 2.42 | 3.03 | – |
| 100 | cloze12 | **1.96** | 2.01 | 2.51 | – |

"Footnote 3: since that expression is the product of both probability distributions and probability densities, our cross-entropy metric is actually the sum of entropy terms and (potentially negative) differential entropy terms; thus, a value of 0 has no special significance.",
"Wallach et al. (2009) give several methods for estimating the intractable sum in language $\ell$.",
"We use the simple harmonic-mean estimator, based on 50 samples of $a$ drawn with our Gibbs sampler (warm-started from the final E-step of training).",
"Cloze evaluation: in addition, following Cotterell and Eisner (2017), we evaluate our trained model's ability to perform a cloze task (Taylor, 1953).",
"Given $n_\ell - 1$ or $n_\ell - 2$ of the vowels in held-out language $\ell$, can we predict the pronunciations $\mathbf{v}_k$ of the remaining 1 or 2?",
"We predict $\mathbf{v}_k$ to be $\phi(\mu_i)$, where $i = a_k$ is the phone inferred by the sampler.",
"Note that the sampler's inference here is based only on the observed vowels (the likelihood) and the focalization-dispersion preferences of the DPP (the prior).",
"We report the expected error of such a prediction (where error is quantified by Euclidean distance in $(F_1, F_2)$ formant space) over the same 50 samples of $a$.",
"For instance, consider a previously unseen vowel system with formant values $\{(499, 2199), (861, 1420), (571, 1079)\}$.",
"A cloze1 evaluation would aim to predict $\{(499, 2199)\}$ as the missing vowel, given $\{(861, 1420), (571, 1079)\}$ and the fact that $n_\ell = 3$.",
"Figure 3: A plot of $\mathbf{v} = (F_1, F_2)$ over the union of all the training languages' inventories, color-coded by inferred phone ($N = 50$).",
"A cloze12 evaluation would aim to predict two missing vowels.",
"Here, we report the experimental details and hyperparameters used to achieve the reported results.",
"We consider a neural network with $k \in [1, 4]$ layers and find $k = 1$ to be the best performer on development data.",
"Recall that our diffeomorphism constraint requires that each layer have exactly two hidden units, the same as the number of observed formants.",
"We consider $N \in \{15, 25, 50, 100\}$ phones, as well as letting $N$ fluctuate with reversible-jump MCMC (see footnote 1).",
"We train for 100 iterations of EM, taking S = 5 samples at each E-step.",
"At each M-step, we run 50 iterations of SGD for the focalization NN and also for the diffeomorphism NN.",
"For each $N$, we selected $(\sigma^2, \rho)$ by minimizing cross-entropy on a held-out development set.",
"We considered $(\sigma^2, \rho) \in \{10^{-k}\}_{k=1}^{5} \times \{k\}_{k=1}^{5}$.",
"We report results in Table 1.",
"We find that our DPP model improves over the baselines.",
"The results support two claims:",
"(i) dispersion plays an important role in the structure of vowel systems and",
"(ii) learning a non-linear transformation of a Gaussian improves our ability to model sets of formant-pairs.",
"Also, we observe that as we increase the number of phones, the role of the DPP becomes more important.",
"We visualize a sample of the trained alignment in Fig. 3.",
"Frequency Encodes Dispersion.",
"Why does dispersion not always help?",
"The models with fewer phones do not reap the benefits that the models with more phones do.",
"The reason lies in the fact that the most common vowel formants are already dispersed.",
"This indicates that we still have not quite modeled the mechanisms that select for good vowel formants, despite our work at the phonetic level; further research is needed.",
"We would prefer a model that explains the evolutionary motivation of sound systems as communication systems.",
"Number of Induced Phones.",
"What is most salient in the number of induced phones is that it is close to the number of IPA phonemes in the data.",
"However, the performance of the phoneme-supervised system is much worse, indicating that, perhaps, while the linguists have the right idea about the number of universal symbols, they did not specify the correct IPA symbol in all cases.",
"Our data analysis indicates that this is often due to pragmatic concerns in linguistic field analysis.",
"For example, even if / I / is the proper IPA symbol for the sound, if there is no other sound in the vicinity the annotator may prefer to use more common /i/.",
"Most closely related to our work is the classic study of Liljencrants and Lindblom (1972), who provide a simulation-based account of vowel systems.",
"They argued that minima of a certain objective that encodes dispersion should correspond to canonical vowel systems of a given size n .",
"Our tack is different in that we construct a generative probability model, whose parameters we learn from data.",
"However, the essence of modeling is the same in that we explain formant values, rather than discrete IPA symbols.",
"By extension, our work is also closely related to extensions of this theory (Schwartz et al., 1997; Roark, 2001) that focused on incorporating the notion of focalization into the experiments.",
"Our present paper can also be regarded as a continuation of Cotterell and Eisner (2017), in which we used DPPs to model vowel inventories as sets of discrete IPA symbols.",
"That paper pretended that each IPA symbol had a single cross-linguistic ( F 1 , F 2 ) pair, an idealization that we remove in this paper by discarding the IPA symbols and modeling formant values directly.",
"Our model combines existing techniques of probabilistic modeling and inference to attempt to fit the actual distribution of the world's vowel systems.",
"We presented a generative probability model of sets of measured ( F 1 , F 2 ) pairs.",
"We view this as a necessary step in the development of generative probability models that can explain the distribution of the world's languages.",
"Previous work on generating vowel inventories has focused on how those inventories were transcribed into IPA by field linguists, whereas we focus on the field linguists' acoustic measurements of how the vowels are actually pronounced.",
"We would like to acknowledge Tim Vieira, Katharina Kann, Sebastian Mielke and Chu-Cheng Lin for reading many early drafts.",
"The first author would like to acknowledge an NDSEG grant and a Facebook PhD fellowship.",
"This material is also based upon work supported by the National Science Foundation under Grant No. 1718846 to the last author."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"Recently, the character-word lattice structure has been proved to be effective for Chinese named entity recognition (NER) by incorporating the word information.",
"However, since the lattice structure is complex and dynamic, most existing lattice-based models are hard to fully utilize the parallel computation of GPUs and usually have a low inference-speed.",
"In this paper, we propose FLAT : F latLA ttice T ransformer for Chinese NER, which converts the lattice structure into a flat structure consisting of spans.",
"Each span corresponds to a character or latent word and its position in the original lattice.",
"With the power of Transformer and well-designed position encoding, FLAT can fully leverage the lattice information and has an excellent parallelization ability.",
"Experiments on four datasets show FLAT outperforms other lexicon-based models in performance and efficiency.",
"Named entity recognition (NER) plays an indispensable role in many downstream natural language processing (NLP) tasks (Chen et al., 2015; Diefenbach et al., 2018).",
"Compared with English NER (Lample et al., 2016; Yang et al., 2017; Liu et al., 2017; Sun et al., 2020), Chinese NER is more difficult since it usually involves word segmentation.",
"Recently, the lattice structure has been proved to have a great benefit to utilize the word information and avoid the error propagation of word segmentation (Zhang and Yang, 2018).",
"We can match a sentence with a lexicon to obtain the latent words in it, and then we get a lattice like in Figure",
"1(a).",
"The lattice is a directed acyclic graph, where each node is a character or a latent word.",
"The lattice includes a sequence of characters and potential Corresponding author.",
"words in the sentence.",
"They are not ordered sequentially, and the word's first character and last character determine its position.",
"Some words in lattice may be important for NER.",
"For example, in Figure",
"1(a), (Renhe Pharmacy) can be used to distinguish between the geographic entity (Chongqing) and the organization entity (Chongqing People).",
"There are two lines of methods to leverage the lattice.",
"(1) One line is to design a model to be compatible with lattice input, such as lattice LSTM (Zhang and Yang, 2018) and LR-CNN (Gui et al., 2019a).",
"In lattice LSTM, an extra word cell is employed to encode the potential words, and attention mechanism is used to fuse variable-number nodes at each position, as in Figure",
"1(b).",
"LR-CNN uses CNN to encode potential words at different window sizes.",
"However, RNN and CNN are hard to model long-distance dependencies (Vaswani et al., 2017), which may be useful in NER, such as coref-erence (Stanislawek et al., 2019).",
"Due to the dynamic lattice structure, these methods cannot fully utilize the parallel computation of GPU.",
"(2) Another line is to convert lattice into graph and use a graph neural network (GNN) to encode it, such as Lexicon-based Graph Network (LGN) (Gui et al., 2019b) and Collaborative Graph Network (CGN) (Sui et al., 2019).",
"While sequential structure is still important for NER and graph is general counterpart, their gap is not negligible.",
"These methods need to use LSTM as the bottom encoder to carry the sequential inductive bias, which makes the model complicated.",
"In this paper, we propose FLAT : F lat LA ttice T ransformer for Chinese NER.",
"Transformer (Vaswani et al., 2017) adopts fully-connected self-attention to model the long-distance dependencies in a sequence.",
"To keep the position information, Transformer introduces the position representation for each token in the sequence.",
"Inspired by the idea of position representation, we design an ingenious position encoding for the lattice-structure, as shown in Figure",
"1(c).",
"In detail, we assign two positional indices for a token (character or word): head position and tail position, by which we can reconstruct a lattice from a set of tokens.",
"Thus, we can directly use Transformer to fully model the lattice input.",
"The self-attention mechanism of Transformer enables characters to directly interact with any potential word, including self-matched words.",
"To a character, its self-matched words denote words which include it.",
"For example, in Figure",
"1(a), self-matched words of (Drug) are (Renhe Pharmacy) and (Pharmacy)(Sui et al., 2019).",
"Experimental results show our model outperforms other lexicon-based methods on the performance and inference-speed.",
"Our code will be released at https://github.com/LeeSureman/Flat-Lattice-Transformer.",
"In this section, we briefly introduce the Transformer architecture.",
"Focusing on the NER task, we only discuss the Transformer encoder.",
"It is composed of self-attention and feedforward network (FFN) layers.",
"Each sublayer is followed by residual connection and layer normalization.",
"FFN is 0 1 2 3 4 5 0 2 4 -1 0 1 2 3 4 -1 1 3 -2 -1 0 1 2 3 -2 0 2 -3 -2 -1 0 1 2 -3 -1 1 -4 -3 -2 -1 0 1 -4 -2 0 -5 -4 -3 -2 -1 0 -5 -3 -1 0 1 2 3 4 5 0 2 4 -2 -1 0 1 2 3 -2 0 2 -4 -3 -2 -1 0 1 -4 -2 0 Embedding Self-Attention FFN () Linear & CRF And Qing People Chong Shop Drug Renhe Pharmacy Chongqing Pharmacy () () () 1 1 2 2 3 3 4 4 5 5 6 6 1 2 3 6 5 6 Head Tail Add & Norm Add & Norm Token Figure 2: The overall architecture of FLAT.",
"a position-wise multi-layer Perceptron with nonlinear transformation.",
"Transformer performs self-attention over the sequence by H heads of attention individually and then concatenates the result of H heads.",
"For simplicity, we ignore the head index in the following formula.",
"The result of per head is calculated as: Att( A , V ) = softmax( A ) V , (1) A ij = (cid:18) Q i K j T d head (cid:19) , (2) [ Q , K , V ] = E x [ W q , W k , W v ] , (3) where E is the token embedding lookup table or the output of last Transformer layer.",
"W q , W k , W v R d model d head are learnable parameters, and d model = H d head , d head is the dimension of each head.",
"The vanilla Transformer also uses absolute position encoding to capture the sequential information.",
"Inspired by Yan et al. (2019), we think commutativity of the vector inner dot will cause the loss of directionality in self-attention.",
"Therefore, we consider the relative position of lattice also significant for NER.",
"After getting a lattice from characters with a lexicon, we can flatten it into flat counterpart.",
"The flat-lattice can be defined as a set of spans, and a span corresponds to a token, a head and a tail, like in Figure",
"1(c).",
"The token is a character or word.",
"The head and tail denote the position index of the token's first and last characters in the original sequence, and they indicate the position of the token in the lattice.",
"For the character, its head and tail are the same.",
"There is a simple algorithm to recover flat-lattice into its original structure.",
"We can first take the token which has the same head and tail, to construct the character sequence.",
"Then we use other tokens (words) with their heads and tails to build skip-paths.",
"Since our transformation is recoverable, we assume flat-lattice can maintain the original structure of lattice.",
"The flat-lattice structure consists of spans with different lengths.",
"To encode the interactions among spans, we propose the relative position encoding of spans.",
"For two spans x i and x j in the lattice, there are three kinds of relations between them: intersection, inclusion and separation, determined by their heads and tails.",
"Instead of directly encoding these three kinds of relations, we use a dense vector to model their relations.",
"It is calculated by continuous transformation of the head and tail information.",
"Thus, we think it can not only represent the relation between two tokens, but also indicate more detailed information, such as the distance between a character and a word.",
"Let head [ i ] and tail [ i ] denote the head and tail position of span x i .",
"Four kinds of relative distances can be used to indicate the relation between x i and x j .",
"They can be calculated as: d ( hh ) ij = head [ i ] head [ j ] , (4) d ( ht ) ij = head [ i ] tail [ j ] , (5) d ( th ) ij = tail [ i ] head [ j ] , (6) d ( tt ) ij = tail [ i ] tail [ j ] , (7) where d ( hh ) ij denotes the distance between head of x i and tail of x j , and other d ( ht ) ij , d ( th ) ij , d ( tt ) ij have similar meanings.",
"The final relative position encoding of spans is a simple non-linear transformation of the four distances: R ij = ReLU( W r ( p d ( hh ) ij p d ( th ) ij p d ( ht ) ij p d ( tt ) ij )) , (8) where W r is a learnable parameter, denotes the concatenation operator, and p d is calculated as in Vaswani et al. (2017), p (2 k ) d = sin (cid:16) d/ 10000 2 k/d model (cid:17) , (9) p (2 k +1) d = cos (cid:16) d/ 10000 2 k/d model (cid:17) , (10) where d is d ( hh ) ij , d ( ht ) ij , d ( th ) ij or d ( tt ) ij and k denotes the index of dimension of position encoding.",
"Then we use a variant of self-attention (Dai et al., 2019) to leverage the relative span position encoding as follows: Ontonotes MSRA Resume Weibo Train 15740 46675 3821 1350 Char avg 36.92 45.87 32.15 54.37 Word avg 17.59 22.38 24.99 21.49 Entity avg 1.15 1.58 3.48 1.42 Table 1: Statistics of four datasets.",
"where W q , W k,R , W k,E R d model d head and u , v R d head are learnable parameters.",
"Then we replace A with A in",
"Eq.(1).",
"The following calculation is the same with vanilla Transformer.",
"After FLAT, we only take the character representation into output layer, followed by a Condiftional Random Field (CRF) (Lafferty et al., 2001).",
"Four Chinese NER datasets were used to evaluate our model, including (1) Ontonotes 4.0 (Weischedel and Consortium, 2013) (2) MSRA (Levow, 2006) (3) Resume (Zhang and Yang, 2018) (4) Weibo (Peng and Dredze, 2015; He and Sun, 2016).",
"We show statistics of these datasets in Table 1.",
"We use the same train, dev, test split as Gui et al. (2019b).",
"We take BiLSTM-CRF and TENER (Yan et al., 2019) as baseline models.",
"TENER is a Transformer using relative position encoding for NER, without external information.",
"We also compare FLAT with other lexicon-based methods.",
"The embeddings and lexicons are the same as Zhang and Yang (2018).",
"When comparing with CGN (Li et al., 2018), we use the same lexicon as CGN.",
"The way to select hyper-parameters can be found in the supplementary material.",
"In particular, we use only one layer Transformer encoder for our model.",
"As shown in Table 2, our model outperforms baseline models and other lexicon-based models on four Chinese NER datasets.",
"Our model outperforms TENER (Yan et al., 2019) by 1.72 in average F1 score.",
"For lattice LSTM, our model has an average F1 improvement of 1.51 over it.",
"When using another lexicon (Li et al., 2018), our model also outperforms CGN by 0.73 in average F1 score.",
"Maybe due to the characteristic of Transformer, the improvement of FLAT over other lexicon-based models on small datasets is not so significant like that on large datasets.",
"We think self-attention mechanism brings two advantages over lattice LSTM: 1) All characters can directly interact with its self-matched words.",
"2) Long-distance dependencies can be fully modeled.",
"Due to our model has only one layer, we can strip them by masking corresponding attention.",
"In detail, we mask attention from the character to its self-matched word and attention between tokens whose distance exceeds 10.",
"As shown in Table 2, the first mask brings a significant deterioration to FLAT while the second degrades performance slightly.",
"As a result, we think leveraging information of self-matched words is important For Chinese NER.",
"To verify the computation efficiency of our model, we compare the inference-speed of different lexicon-based models on Ontonotes.",
"The result is shown in Figure 3.",
"GNN-based models outperform lattice LSTM and LR-CNN.",
"But the RNN encoder of GNN-based models also degrades their speed.",
"Because our model has no recurrent module and can fully leverage parallel computation of GPU, it outperforms other methods in running efficiency.",
"In terms of leveraging batch-parallelism, the speedup ratio brought by batch-parallelism is 4.97 for FLAT, 2.1 for lattice LSTM, when batch size = 16.",
"Due to the simplicity of our model, it can benefit from batch-parallelism more significantly.",
"Compared with TENER, FLAT leverages lexicon resources and uses a new position encoding.",
"To probe how these two factors bring improvement.",
"We set two new metrics, 1) Span F : while the common F score used in NER considers correctness of both the span and the entity type, Span F only considers the former.",
"2) Type Acc : proportion of full-correct predictions to span-correct predictions.",
"Table 3 shows two metrics of three models on the devlopment set of Ontonotes and MSRA.",
"We can find: 1) FLAT outperforms TENER in two metrics significantly.",
"2) The improvement on Span F brought by FLAT is more significant than that on Type Acc.",
"3) Compared to FLAT, FLAT head 's deterioration on Span F is more significant than that on Type Acc.",
"These show: 1) The new position encoding helps FLAT locate entities more accurately.",
"2) The pre-trained word-level embedding Lexicon Ontonotes MSRA Resume Weibo BERT -80.14 94.95 95.53 68.20 BERT+FLAT YJ 81.82 96.09 95.86 68.55 Table 4: Comparision between BERT and BERT+FLAT.",
"We also compare FLAT equipped with BERT with common BERT+CRF tagger on four datasets, and Results are shown in Table",
"4. We find that, for large datasets like Ontonotes and MSRA, FLAT+BERT can have a significant improvement over BERT.",
"But for small datasets like Resume and Weibo, the improvement of FLAT+BERT over BERT is marginal.",
"Zhang and Yang (2018) introduced a lattice LSTM to encode all characters and potential words recognized by a lexicon in a sentence, avoiding the error propagation of segmentation while leveraging the word information.",
"Gui et al. (2019a) exploited a combination of CNN and rethinking mechanism to encode character sequence and potential words at different window sizes.",
"Both models above suffer from the low inference efficiency and are hard to model long-distance dependencies.",
"Gui et al. (2019b) and Sui et al. (2019) leveraged a lexicon and character sequence to construct graph, converting NER into a node classification task.",
"However, due to NER's strong alignment of label and input, their model needs an RNN module for encoding.",
"The main difference between our model and models above is that they modify the model structure according to the lattice, while we use a well-designed position encoding to indicate the lattice structure.",
"For lattice-based Transformer, it has been used in speech translation and Chinese-source translation.",
"The main difference between them is the way to 1 https://github.com/fastnlp/fastNLP indicate lattice structure.",
"In Chinese-source translation, Xiao et al. (2019) take the absolute position of nodes' first characters and the relation between each pair of nodes as the structure information.",
"In speech translation, Sperber et al. (2019) used the longest distance to the start node to indicate lattice structure, and Zhang et al. (2019) used the shortest distance between two nodes.",
"Our span position encoding is more natural, and can be mapped to all the three ways, but not vise versa.",
"Because NER is more sensitive to position information than translation, our model is more suitable for NER.",
"Recently, Porous Lattice Transformer (Mengge et al., 2019) is proposed for Chinese NER.",
"The main difference between FLAT and Porus Lattice Transformer is the way of representing position information.",
"We use head' and tail' to represent the token's position in the lattice.",
"They use head', tokens' relative relation (not distance) and an extra GRU.",
"They also use porous' technique to limit the attention distribution.",
"In their model, the position information is not recoverable because head' and relative relation can cause position information loss.",
"Briefly, relative distance carries more information than relative relation.",
"In this paper, we introduce a flat-lattice Transformer to incorporate lexicon information for Chinese NER.",
"The core of our model is converting lattice structure into a set of spans and introducing the specific position encoding.",
"Experimental results show our model outperforms other lexicon-based models in the performance and efficiency.",
"We leave adjusting our model to different kinds of lattice or graph as our future work.",
"We thank anonymous reviewers for their responsible attitude and helpful comments.",
"We thank Tianxiang Sun, Yunfan Shao and Lei Li for their help, such as drawing skill sharing, pre-reviewing, etc.",
"This work is supported by the National Key Research and Development Program of China (No. 2018YFC0831103), National Natural Science Foundation of China (No. U1936214 and 61672162), Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01) and ZJLab."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"method",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"We take up the scientific question of what determines the preferred order of adjectives in English, in phrases such as big blue box where multiple adjectives modify a following noun.",
"We implement and test four quantitative theories, all of which are theoretically motivated in terms of efficiency in human language production and comprehension.",
"The four theories we test are subjectivity (Scon-tras et al., 2017), information locality (Futrell, 2019), integration cost (Dyer, 2017), and information gain, which we introduce.",
"We evaluate theories based on their ability to predict orders of unseen adjectives in hand-parsed and automatically-parsed dependency treebanks.",
"We find that subjectivity, information locality, and information gain are all strong predictors, with some evidence for a two-factor account, where subjectivity and information gain reflect a factor involving semantics, and information locality reflects collocational preferences.",
"Across languages, there exist strong and stable constraints on the order of adjectives when more than one is used to modify a noun (Dixon, 1982; Sproat and Shih, 1991).",
"For example, in English, big blue box sounds natural and appears relatively frequently in corpora, while blue big box sounds less natural and occurs less frequently (Scontras et al., 2017).",
"In this paper, we take up the scientific question of what explains these constraints in natural language.",
"To do so, we implement quantitative models that have been proposed in previous literature as explanations for these constraints, and compare their accuracy in predicting adjective ordering data in parsed corpora of English 1 .",
"In the last few years, adjective order has become a crucial testing ground for quantitative theories 1 All code and data are available at https://github.",
"of syntax.",
"These theories provide mathematical models that can describe the distribution of words in sentences and the way those words combine to yield the meaning of a sentence, in a way that captures the fine-grained quantitative patterns observable in large text datasets (Manning, 2003; Bres-nan et al., 2007; Chen and Ferrer-i-Cancho, 2019).",
"Quantitative syntactic theories are often efficiency-based , meaning that they model word distributions as the result of a process that tries to maximize information transfer while minimizing some measure of cognitive cost; as a result, they often use the mathematical language of information theory.",
"Such theories promise not only to describe distributions of words, but also to explain why they take the shape they do, by viewing human language as an efficient code subject to appropriate constraints.",
"This work informs NLP by providing a theory of language structure that integrates with data-driven, optimization-based machine learning models.",
"Adjective order is a fruitful empirical target for quantitative theories of syntax because it is an area where the traditional discrete and symbolic theories become highly complex, and a quantitative approach becomes more attractive.",
"For example, in the formal syntax literature, a standard explanation for adjective order constraints is that each adjective belongs to a certain semantic class (e.g., COLOR or SIZE ) and that there exists a universal total order on these semantic classes (e.g., COLOR < SIZE ) shared among all languages, which determines the order of adjectives in any given instance (Cinque, 1994; Scott, 2002).",
"Such discrete theories of adjective order become complex rapidly as the number of semantic classes to be posited becomes large (upwards of twelve in Scontras et al. 2017) and more fine-grained (see Bar-Sever et al. 2018 for discussion of the learning problem posed by such classifications).",
"In contrast, quantitative syntax theories typically identify a single construct that grounds out in real-valued numerical scores given to adjectives, which determine their ordering preferences.",
"These scores can be estimated based on large-scale corpus data or based on human ratings.",
"In what follows, we test the predictions of four such theories: the subjectivity hypothesis (Scontras et al., 2017; Simonic, 2018; Hahn et al., 2018; Franke et al., 2019; Scontras et al., 2019), the information locality hypothesis (Futrell and Levy, 2017; Futrell et al., 2017; Hahn et al., 2018; Futrell, 2019), the integration cost hypothesis (Dyer, 2017), and the information gain hypothesis, which we introduce.",
"We begin with a presentation of the details of each theory, then implement the theories and test their predictions against large-scale naturalistic data from English.",
"In addition to comparing the predictors in terms of accuracy, we also perform a number of analyses to determine the important similarities and differences among their predictions.",
"The paper concludes with a discussion of what our results tell us about adjective order and related issues, and a look towards future work.",
"Scontras et al. (2017) show that adjective order is strongly predicted by adjectives' subjectivity scores : an average rating obtained by asking human participants to rate adjectives on a numerical scale for how subjective they are.",
"Adjectives that are rated as more subjective typically appear farther from the noun than adjectives rated as less subjective, and the strength of ordering preferences tracks the subjectivity differential between two adjectives.",
"For example, in big blue box , the adjective big has a subjectivity rating of 0.64 (out of 1), and the adjective blue has a subjectivity rating of 0.30.",
"If adjectives are placed in order of decreasing subjectivity, then big must appear before blue , corresponding to the preferred order.",
"The notion of subjectivity as a predictor of adjective order was previously introduced by Hetzron (1978).",
"Subsequent work has attempted to explain the role of subjectivity in adjective ordering by appealing to the communicative benefit afforded by ordering adjectives with respect to decreasing subjectivity.",
"For example, Franke et al. (2019) use simulated reference games to demonstrate that, given a set of independently-motivated assumptions concerning the composition of meaning in multi-adjective strings, subjectivity-based orderings lead to a greater probability of successful reference resolution; the authors thus offer an evolutionary explanation for the role of subjectivity in adjective ordering (see also Simonic, 2018; Hahn et al., 2018; Scontras et al., 2019).",
"The theory of information locality holds that words that have high mutual information are under pressure to be close to each other in linear order (Futrell and Levy, 2017; Futrell et al., 2017).",
"Information locality is a generalization of the well-supported principle of dependency length minimization (Liu et al., 2017; Temperley and Gildea, 2018).",
"In the case of adjective ordering, the prediction is simply that adjectives that have high pointwise mutual information (PMI) with their head noun will tend to be closer to that noun.",
"The PMI of an adjective a and a noun n is (Fano, 1961; Church and Hanks, 1990): PMI ( a : n ) log p ( a, n ) p ( a ) p ( n ) .",
"In this paper, we take the relevant joint distribution p ( a, n ) to be the distribution of adjectives and nouns in a dependency relationship, with the marginals calculated as p ( a ) = P n p ( a, n ) and p ( n ) = a p ( a, n ) .",
"Information locality is motivated as a consequence of a more general theory of efficiency in human language.",
"In this theory, languages should maximize information transfer while minimizing cognitive information-processing costs associated with language production and comprehension.",
"Information locality emerges from these theories when we assume that the relevant measure of information-processing cost is the surprisal of words given lossy memory representations (Hale, 2001; Levy, 2008; Smith and Levy, 2013; Futrell and Levy, 2017; Futrell, 2019).",
"The theory of integration cost is also based in the idea of efficiency with regard to information-processing costs.",
"It differs from information locality in that it assumes that the correct metric of processing difficulty for a word w is the entropy over the possible heads of w : Cost ( w ) H [ T | w ] = X t p T ( t | w ) log p T ( t | w ) , (2) where T is a random variable indicating the head t of the word w (Dyer, 2017).",
"This notion of cost captures the amount of uncertainty that has to be resolved about the proper role of the word w with respect to the rest of the words in the sentence.",
"Like information locality, the theory of integration cost recovers dependency length minimization as a special case.",
"For the case of predicting adjective order, the prediction is that an adjective a will be closer to a noun when it has lower integration cost: IC ( a ) = H [ N | a ] , (3) where N is a random variable ranging over nouns.",
"Integration cost corresponds to an intuitive idea previously articulated in the adjective ordering literature.",
"The idea is that adjectives that can modify a smaller set of nouns appear closer to the noun: for example, an order such as big wooden spoon is preferred over wooden big spoon because the word big can modify nearly any noun, while wooden can only plausibly modify a small set of nouns (Ziff, 1960).",
"The connection between integration cost and set size comes from the information-theoretic notion of the typical set (Cover and Thomas, 2006, pp. 5771); the entropy of a random variable can be interpreted as the (log) cardinality of the typical set of samples from that variable.",
"When we order adjectives by integration cost, this is equivalent to ordering them such that adjectives that can modify a larger typical set of nouns appear farther from the noun.",
"The result is that each adjective gradually reduces the entropy of the possible nouns to follow, thus avoiding information-processing costs that may be associated with entropy reduction (Hale, 2006, 2016; Dye et al., 2018).",
"We propose a new efficiency-based predictor of adjective order: information gain.",
"The idea is to view the noun phrase, consisting of prenominal adjectives followed by the noun, as a decision tree for identifying a referent, where each word partitions the space of possible referents.",
"Each partitioning is associated with some information gain, indicating how much the set of possible referents shrinks.",
"In line with the logic for integration cost, we propose that the word with smaller information gain will be placed earlier, so that the set of referents is gradually narrowed by each word.",
"As generally implemented in decision trees, information gain refers to the reduction of entropy obtained from partitioning a set on a feature (Quinlan, 1986).",
"In our case, the distribution of nouns N is partitioned on a given adjective a , creating two partitions: N a and its complement N ac .",
"The difference between the starting entropy H [ N ] and the sum of the entropy of each partition, conditioned on the size of that partition, is the information gain of a : IG ( a ) = H [ N ] (cid:20) | N a | | N | H [ N a ] + | N ac | | N | H [ N ac ] (cid:21) .",
"Information gain is therefore comprised of both positive and negative evidence.",
"That is, specifying an adjective such as big partitions the probability distribution of nouns into N big , the subset of N which takes big as a dependent, and N bigC , the subset of N which does not.",
"Crucially, H [ N a ] is not H [ N | a ] in general.",
"H [ N | a ] is the conditional entropy of nouns given a specific adjective, while H [ N a ] is the entropy of a distribution over nouns whose support is limited to noun types that have been observed to occur with an adjective a.",
"Combined with H [ N ac ] , information gain tells us how much the entropy of N is reduced by partitioning on a .",
"This means that information gain and integration cost, while conceptually similar, are not mathematically equivalent.",
"To our knowledge, information gain has not been previously suggested as a predictor of adjective ordering, although Danks and Glucksberg (1971) expressed a similar intuition in proposing that adjectives are ordered according to their dis-criminative potential'.",
"Although decision-tree algorithms such as ID3 choose the highest-IG feature first, we predict that the lower-information-gain adjective will precede the higher one.",
"Previous corpus studies of adjective order include Malouf (2000), who examined methods for ordering adjectives in a natural language generation context, and Wulff (2003), who examined effects of phonological length, syntactic category ambiguity, semantic closeness, adjective frequency, and",
"a measure similar to PMI called noun specificity.",
"Our work differs from this previous work by focusing on recently-introduced predictors that have theoretical motivations grounded in efficiency and information theory.",
"The theories we test here (except information gain) have been tested in previous corpus studies, but never compared against each other.",
"Scontras et al. (2017) validate that subjectivity is a good predictor of adjective order in corpora, and Hahn et al. (2018) and Futrell et al. (2019) evaluate both information locality and subjectivity.",
"Dyer (2018) uses integration cost to model the order of same-side sibling dependents cross-linguistically and across all syntactic categories.",
"Our task is to find predictors of adjective order based solely on data about individual adjectives and nouns.",
"More formally, the goal is to find a scoring function S ( A, N ) applying to an adjective A and a noun N , such that the order of two adjectives modifying a noun A 1 A 2 N can be predicted accurately by comparing S ( A 1 , N ) and S ( A 2 , N ) .",
"Furthermore, the scoring function S should not include information about relative order in observed sequences of the form A 1 A 2 N the scoring function should be based only on corpus data about co-occurrences of A and N , or on human ratings about A and/or N .",
"We apply this restriction because our goal is to evaluate scientific theories of why adjectives are ordered the way they are, rather than to achieve maximal raw accuracy.",
"Corpus-based predictors We estimate information-theoretic quantities for adjectives using a large automatically-parsed subsection of the English Common Crawl corpus (Buck et al., 2014; Futrell et al., 2019).",
"The use of a parsed corpus is necessary to identify adjectives that are dependents of nouns in order to calculate PMI and IC.",
"As described in Futrell et al. (2019), this corpus was produced by heuristically filtering Common Crawl to contain only full sentences and to remove web boilerplate text, and then parsing the resulting text using SyntaxNet (Andor et al., 2016), obtaining a total of 1 billion tokens of automatically parsed web text.",
"In this work, we use a subset of this corpus, described below.",
"From this corpus, we extract two forms of data.",
"First, we extract adjectivenoun (AN) pairs : a set of pairs h A, N i where A is an adjective and N is a noun and N is the head of A with dependency type amod .",
"As in Futrell (2019), we define A as an adjective iff its part-of-speech is JJ and its wordform is listed as an adjective in the English CELEX database (Baayen et al., 1995).",
"We define N as a noun iff its part-of-speech is NN or NNS and its wordform is listed as a noun in CELEX.",
"These AN pairs are used to estimate the information-theoretic predictors that we are interested in.",
"We extracted 33,210,207 adjectivenoun pairs from the parsed Common Crawl corpus.",
"Second, we extract adjectiveadjectivenoun (AAN) triples : a set of triples h A 1 , A 2 , N i where A 1 and A 2 are adjectives as defined above, and A 1 and A 2 are both adjective dependents with relation type amod of a single noun head N .",
"Furthermore, A 1 and A 2 must not have any further dependents, and they must appear in the order A 1 A 2 N in the corpus with no intervening words.",
"We extracted a total of 842,714 AAN triples from the parsed Common Crawl corpus.",
"The values of all corpus-based predictors are estimated using the AN pairs.",
"The AAN triples are used only for fitting regressions from the predictors to adjective orders, and for evaluation.",
"Ratings-based predictors We gathered subjectivity ratings for all 398 adjectives present in AAN triples in the English UD corpus.",
"These subjectivity ratings were collected over Amazon.com's Mechanical Turk, using the methodology of Scontras et al. (2017).",
"264 English-speaking participants indicated the subjectivity of 30 random adjectives by adjusting a slider between endpoints labeled completely objective (coded as 0) and completely subjective (coded as 1).",
"Each adjective received an average of 20 ratings.",
"Test set As a held-out test set for our predictors, we use the English Web Treebank (EWT), a hand-parsed corpus, as contained in Universal Dependencies (UD) v2.4 (Silveira et al., 2014; Nivre, 2015).",
"Following our criteria, we extract 155 AAN triples having scores for all our predictors.",
"Because this test set is very small, we also evaluate against a held-out portion of the parsed Common Crawl data.",
"In the Common Crawl test set, after including only AAN triples that have scores for all of our predictors, we have 41,822 AAN triples.",
"Our information-theoretic predictors require estimates of probability distributions over adjectives and nouns.",
"To estimate these probability distributions, we first use maximum likelihood estimation as applied to counts of wordforms in AN pairs.",
"We call these estimates wordform estimates .",
"Although maximum likelihood estimation is sufficient to give an estimate of the general entropy of words (Bentz et al., 2017), it is not yet clear that it gives a good measure for conditional entropy or mutual information, due to data sparsity, even with millions of tokens of text (Futrell et al., 2019).",
"Therefore, as a second method that alleviates the data sparsity issue, we also calculate our probability distributions not over raw wordforms but over clusterings of words in an embedding space, a method which showed promise in Futrell et al. (2019).",
"To derive word clusters, we use sklearn.cluster.KMeans applied to a pre-trained set of 1.9 million 300-dimension GloVe vectors 2 generated from the Common Crawl corpus (Pennington et al., 2014).",
"We classify adjectives into k A = 300 clusters and nouns into k N = 1000 clusters.",
"These numbers k were found by choosing the largest k multiple of 100 that did not result in any singleton clusters.",
"We then estimated probabilities p ( a, n ) by maximum likelihood estimation after replacing wordforms a and n with their cluster indices.",
"This clustering method alleviates data sparsity by reducing the size of the support of the distributions over adjectives and nouns, to k A and k N respectively, and by effectively spreading probability mass among words with similar semantics.",
"The clusters might also end up recapitulating the semantic categories that have played a role in more traditional syntactic theories of adjective order (Dixon, 1982; Cinque, 1994; Scott, 2002).",
"We call these estimates cluster estimates .",
"Fitting predictors to data Most of our individual predictors come along with theories that say what their effect on adjective order should be.",
"Adjectives with low PMI should be farther from the noun, adjectives with high IC should be farther from the noun, and adjectives with high subjectivity should be farther from the noun.",
"Therefore, 2 http://nlp.stanford.edu/data/glove.",
"strictly speaking, it is not necessary to fit these predictors to any training data: we can evaluate our theories based on their a priori predictions simply by asking how accurately we can predict the order of adjectives in AAN triples based on the rules above.",
"However, we can get a deeper picture of the performance of our predictors by using them in classifiers for adjective order.",
"By fitting classifiers using our predictors, we can easily extend our models to ones with multiple predictors, in order to determine if a combined set of the predictors gives increased accuracy over any one.",
"Logistic regression method We fit logistic regressions to predict adjective order in AAN triples using our predictors.",
"Our goal is to predict the order of the triple from the unordered set of the two adjectives { A 1 , A 2 } and the noun N .",
"To do so, we consider the adjectives in lexicographic order: Given an AAN triple, let A 1 denote the lexicographically-first adjective, and A 2 the second.",
"Then any given AAN triple is either of the form h A 1 , A 2 , N i or h A 2 , A 1 , N i .",
"We fit a logistic regression to predict this order given the difference in the values of the predictors for the two adjectives.",
"That is, we fit a logistic regression of the form in Figure",
"1. This method of fitting a classifier to predict order data was used previously in Morgan and Levy (2016).",
"Based on theoretical considerations and previous empirical results, we expect that the fitted values of 1 will be negative for PMI and positive for IC and subjectivity.",
"The regression in Figure 1 can easily be extended to include multiple predictors, with a separate for each.",
"Evaluation metrics We evaluate our models using raw accuracy in predicting the order of held-out AAN triples.",
"We also calculate 95% confidence intervals on these accuracies, indicating our uncertainty about how the accuracy would change in repeated experiments.",
"Following standard experimental practice, if we find that two predictors achieve different accuracies, but their confidence intervals overlap, then we conclude that we do not have evidence that their accuracies are reliably different.",
"We say a difference in accuracy between predictors is significant if the 95% confidence intervals do not overlap.",
"Evaluation on held-out hand-parsed data It is crucial that we not evaluate solely on automatically-parsed data.",
"The reason is that both log p ( h A 1 , A 2 , N i ) p ( h A 2 , A 1 , N i ) = 0 + 1 ( S ( A 1 , N ) S ( A 2 , N )) + Figure 1: Logistic regression for adjective order.",
"PMI and IC, as measures of the strength of statistical association between nouns and adjectives, could conceivably double as predictors of parsing accuracy for automatic dependency parsers.",
"If that is the case, then we might observe that AAN triples with low PMI or high IC are rare in automatically parsed data.",
"However, this would not be a consequence of any interesting theory of cognitive cost, but rather simply an artifact of the automatic parser used.",
"To avoid this confound, we include an evaluation based on held-out hand-parsed data in the form of the English Web Treebank.",
"Table 1a shows the accuracies of our predictors in predicting held-out adjective orders in the Common Crawl test set, visualized in Figure 2a.",
"We find that the pattern of results depends on whether predictors are estimated based on wordforms or based on distributional clusters.",
"When estimating based on wordforms, we find that subjectivity and PMI have the best accuracy.",
"When estimating based on clusters, the accuracy of PMI drops, and the best predictor is subjectivity, with IG close behind.",
"We find a negative logistic regression weight for information gain, indicating that the adjective with lower information gain is placed first.",
"This basic pattern of results is confirmed in the hand-parsed EWT data.",
"Accuracies of predictors on the EWT test set are shown in Table 1b and visualized in Figure 2b.",
"When estimating based on wordforms, the best predictors are subjectivity and PMI, although the confidence intervals of all predictors are overlapping.",
"When estimating based on clusters, IG has the best performance, and PMI again drops in accuracy.",
"For this case, IG, IC, and subjectivity all have overlapping confidence intervals, so we conclude that there is no evidence that one is better than the other.",
"However, we do have evidence that IG and IC are more accurate than PMI when estimated based on clusters.",
"Adjective order may be determined by multiple separate factors operating in parallel.",
"In order to investigate whether our predictors might be making independent contributions to explaining adjective order, we fit logistic regressions containing multiple predictors.",
"If the best accuracy comes from a model with two or more predictors, then this would be evidence that these two predictors are picking up on separate sources of information relevant for predicting adjective order.",
"We conducted logistic regressions using all sets of two of our predictors.",
"The top 5 such models, in terms of Common Crawl test set accuracy, are shown in Table",
"2. The best two are clus-ter/wordform subjectivity and wordform PMI, followed by cluster subjectivity and cluster information gain.",
"No set of three predictors achieves sig-nificantly higher accuracy than the best predictors shown in Table",
"2. 5.2 Qualitative analysis We manually examined cases where each model made correct and incorrect predictions in the hand-parsed EWT data.",
"Table 3a shows example AAN triples that were ordered correctly by PMI, but not by subjectivity.",
"These are typically cases where a certain adjectivenoun pair forms a common collocation whose meaning is in some cases even noncompositional; for example, bad behaviors is a common collocation when describing training animals, and ulterior motives and logical fallacy are likewise common English collocations.",
"In contrast, when subjectivity makes the right prediction and PMI makes the wrong prediction, these are often cases where a word pair which normally would form a collocation is broken up by another adjective, such as dear sick friend, where dear friend is a common collocation.",
"We also performed a manual qualitative analysis to determine the contribution of information gain beyond subjectivity and PMI.",
"Table 3b shows examples of such cases from the EWT.",
"Many of these seem to be cases with weak preferences, where both the attested order and the the flipped order are acceptable (e.g., tiny little kitten).",
"intervals overlap, other than cluster-based PMI and IG.",
"Ordered correctly by wordform PMI, but not by wordform subjectivity.",
"(b) Ordered correctly by cluster-based information gain, but not by cluster-based subjectivity nor PMI.",
"Our results broadly support the following interpretation.",
"Adjective ordering preferences are largely determined by a semantic factor that can be quan-tified variously using wordform subjectivity or distributional-cluster-based estimates of information gain.",
"In addition to this factor, another factor is in play: when an adjectivenoun pair forms a collocation with a possibly non-compositional meaning, then the adjective in this pair will tend to be placed next to the noun.",
"This latter factor is measured by PMI.",
"This interpretation matches that of Hahn et al. (2018), who found separate contributions from PMI and a model-based operational-ization of subjectivity.",
"Our interpretation is supported by the following points from the analysis above.",
"First, among predictors based solely on wordforms, the best accuracy is obtained by a combination of subjectivity and PMI.",
"Second, when we turn to estimates based on clusters, two things happen: the accuracy of PMI drops, and the accuracy of information gain increases while the accuracy of subjectivity stays about the same.",
"This pattern of results suggests that PMI is measuring a factor that has more to do with specific wordforms, while IG and subjectivity are measuring a factor that has more to do with semantic uncertainty about the noun or about the relationship between the adjective and the noun.",
"We examined a number of theoretically-motivated predictors of adjective order in dependency treebank corpora of English.",
"We found that the predictors have comparable accuracy, but that it is possible to identify two broad factors: a semantic factor variously captured by subjectivity scores and information gain based on word clusters, and a wordform-based factor captured by PMI.",
"This study provides a framework for evaluating further theories of adjective order, and for evaluating the theories given here against new data from dependency treebanks.",
"Generalizing to larger datasets of English is straightforward.",
"More excitingly, we now have the opportunity to bring new languages into the fold.",
"The vast majority of research on adjective ordering, and all the corpus work to our knowledge, has been done on English, where adjectives almost always come before the noun.",
"Studying other typologically-distinct languages provides an opportunity to disentangle the theories that we studied here in a way that cannot be done in English.",
"The available behavioral evidence suggests that mirror-image preferences (e.g., box blue big) may be the norm in post-nominal adjective languages (Martin, 1969; Scontras et al., 2020).",
"Information locality, subjectivity, and integration cost make precisely that prediction, though none addresses mixed-type languages in which adjectives can precede or follow nouns.",
"It is an open question how to implement IG for these post-or mixed-placement adjectives; one possibility is to measure the information gained when the set of adjectives associated to a noun A n is partitioned by an adjective a .",
"In that case, the predictions about post-nominal order could differ substantially from the predictions about pre-nominal order.",
"Our dependency-treebank-based methods can be applied to any other corpus of any language, provided it has enough data in the form of adjectivenoun pairs to get reliable estimates of the information-theoretic predictors.",
"Such studies will be crucial to achieve a complete computational understanding of natural language syntax."
] | [
"method",
"objective",
"abstain",
"method",
"result",
"abstain",
"abstain",
"method",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"objective",
"method",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain"
] |
[
"Automated metaphor detection is a challenging task to identify the metaphorical expression of words in a sentence.",
"To tackle this problem, we adopt pre-trained contextualized models, e.g. , BERT and RoBERTa.",
"To this end, we propose a novel metaphor detection model, namely metaphor-aware late interaction over BERT (MelBERT) .",
"Our model not only leverages contextualized word representation but also benefits from linguistic metaphor identification theories to detect whether the target word is metaphorical.",
"Our empirical results demonstrate that MelBERT outperforms several strong baselines on four benchmark datasets, i.e. , VUA-18, VUA-20, MOH-X, and TroFi.",
"As the conceptual and cognitive mapping of words, a metaphor is a common language expression representing other concepts rather than taking literal meanings of words in context (Lakoff and Johnson, 1980; Lagerwerf and Meijers, 2008).",
"For instance, in the sentence hope is on the horizon, the word horizon does not literally mean the line at the earth's surface.",
"It is a metaphorical expression to describe a positive situation.",
"Therefore, the meaning of horizon is context-specific and different from its literal definition.",
"As the metaphor plays a key role in cognitive and communicative functions, it is essential to understand contextualized and unusual meanings of words ( e.g. , metaphor, metonymy, and personifica-tion) in various natural language processing (NLP) tasks, e.g. , machine translation (Shi et al., 2014), sentiment analysis (Cambria et al., 2017), and dialogue systems (Dybala and Sayama, 2012).",
"A lot of existing studies have developed various computational models to recognize metaphorical words in a sentence.",
"Automated metaphor detection aims at identifying metaphorical expressions using computational models.",
"Existing studies can be categorized into three pillars.",
"First, feature-based models employ various hand-crafted features (Shutova et al., 2010; Turney et al., 2011; Shutova and Sun, 2013; Broadwell et al., 2013; Tsvetkov et al., 2014; Bulat et al., 2017).",
"Although simple and intuitive, they are highly sensitive to the quality of a corpus.",
"Second, some studies (Wu et al., 2018; Gao et al., 2018; Mao et al., 2019) utilize recurrent neural networks (RNNs), which are suitable for analyzing the sequential structure of words.",
"However, they are limited to understanding the diverse meanings of words in context.",
"Lastly, the pre-trained contextualized models, e.g. , BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), have been used for detecting metaphors (Chen et al., 2020; Gong et al., 2020; Su et al., 2020).",
"Owing to the powerful representation capacity, such models have been successful for addressing various NLP tasks (Wang et al., 2019) and document ranking in IR (Mitra and Craswell, 2018).",
"Based on such an advancement, we utilize a contextualized model using two metaphor identification theories, i.e. , Metaphor Identification Procedure (MIP) (Pragglejaz Group, 2007; Steen et al., 2010) and Selectional Preference Violation (SPV) (Wilks, 1975, 1978).",
"For MIP, a metaphorical word is recognized if the literal meaning of a word is different from its contextual meaning (Haagsma and Bjerva, 2016).",
"For instance, in the sentence Don't twist my words, the contextual meaning of twist is to distort the intended meaning, different from its literal meaning, to form into a bent, curling, or distorted shape.",
"For SPV, a metaphorical word is identified if the target word is unusual in the context of its surrounding words.",
"That is, twist is metaphorical because it is unusual in the context of words.",
"Although the key ideas of the two strategies are similar, they have different procedures for detecting metaphorical words and their contexts in the sentence.",
"To this end, we propose a novel metaphor detection model using metaphorical identification theories over the pre-trained contextualized model, namely metaphor-aware late interaction over BERT ( MelBERT ).",
"MelBERT deals with a classification task to identify whether a target word in a sentence is metaphorical or not.",
"As depicted in Figure 2, MelBERT is based on a siamese architecture that takes two sentences as input.",
"The first sentence is a sentence S with a target word w t and the second sentence is a target word w t itself.",
"MelBERT independently encodes S and w t into each embedding vector, which avoids unnecessary interactions between S and w t .",
"Inspired by MIP, MelBERT then employs the contextualized and isolated representations of w t to distinguish between the contextual and literal meaning of w t .",
"To utilize SPV, MelBERT employs the sentence embedding vector and the contextualized target word embedding vector.",
"MelBERT identifies how much the surrounding words mismatch from the target word.",
"Lastly, MelBERT combines two metaphor identification strategies to predict if a target word is metaphorical or not.",
"Each metaphor identification theory is non-trivial for capturing complicated and vague metaphorical words.",
"To overcome these limitations, we incorporate two linguistic theories into a pre-trained contextualized model and utilize several linguistic features such as POS features.",
"To summarize, MelBERT has two key advantages.",
"First, MelBERT effectively employs the contextualized representation to understand various aspects of words in context.",
"Because MelBERT is particularly based on a late interaction over contextualized models, it can prevent unnecessary interactions between two inputs and effectively distinguish the contextualized meaning and the isolated meaning of a word.",
"Second, MelBERT utilizes two metaphor identification theories to detect whether the target word is metaphorical.",
"Experimental results show that MelBERT consistently outperforms state-of-the-art metaphor detection models in terms of F1-score on several benchmark datasets, such as VUA-18, VUA-20, and VUA-Verb datasets.",
"Feature-based approach .",
"Various linguistic features are used to understand metaphorical expressions.",
"Representative hand-engineered features include word abstractness and concreteness (Tur-ney et al., 2011), word imageability (Broadwell et al., 2013), semantic supersenses (Tsvetkov et al., 2014), and property norms (Bulat et al., 2017).",
"However, they have difficulties handling rare usages of metaphors because the features rely on manually annotated resources.",
"To address this problem, sparse distributional features (Shutova et al., 2010; Shutova and Sun, 2013) and dense word embeddings (Shutova et al., 2016; Rei et al., 2017), i.e. , Word2Vec (Mikolov et al., 2013), are used as better linguistic features.",
"For details, refer to the survey (Veale et al., 2016).",
"RNN-based approach .",
"Several studies proposed neural metaphor detection models using recurrent neural networks (RNNs).",
"(Wu et al., 2018) adopts a bidirectional-LSTM (BiLSTM) (Graves and Schmidhuber, 2005) and a convolutional neural network (CNN) using Word2Vec (Mikolov et al., 2013) as text features in addition to part-of-speech (POS) and word clustering information as linguistic features.",
"(Gao et al., 2018) employs BiLSTM as an encoder using GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018) as text input representation.",
"(Mao et al., 2019) makes use of the metaphor identification theory on top of the architecture of (Gao et al., 2018).",
"Despite their success, the shallow neural networks ( e.g. , BiLSTM and CNN) have limitations on representing various aspects of words in context.",
"Contextualization-based approach .",
"Recent studies utilize pre-trained contextualized language models, e.g. , BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), for metaphor detection.",
"Because the pre-trained model can encode rich semantic and contextual information, it is useful for detecting metaphors with fine-tuning training.",
"DeepMet (Su et al., 2020) utilizes RoBERTa with various linguistic features, i.e. , global text context, local text context, and POS features.",
"IlliniMet (Gong et al., 2020) combines RoBERTa with linguistic information obtained from external resources.",
"(Chen et al., 2020) formulates the multitask learning problem for both metaphor detection, and (Leong et al., 2020) reports the results of these models in the VUA 2020 shared task.",
"The key idea of neural semantic matching is that neural models encode a query-document pair into two embedding vectors and compute a relevance score between the query and the document (Mi-tra",
"(Mi-tra and Craswell, 2018).",
"The simple approach is to feed a query-document pair to BERT (De-vlin et al., 2019) and compute a relevance score, where the query and the document are fully interacted (Nogueira et al., 2019; Dai and Callan, 2020).",
"In contrast, SBERT (Reimers and Gurevych, 2019), TwinBERT (Lu et al., 2020), and ColBERT (Khat-tab and Zaharia, 2020) adopt late interaction architectures using siamese BERT, where the query and the document are encoded independently.",
"Our work is based on the late interaction architecture.",
"In other words, the sentence with the target word and the target word is encoded separately to represent contextualized and isolated meanings of the target word.",
"In this section, we propose a novel metaphor detection model over a pre-trained contextualized model.",
"To design our model, we consider two metaphor detection tasks.",
"Given a sentence S = { w 1 , . . . , w n } with n words and a target word w t S , the classification task predicts the metaphoricity ( i.e. , mat-aphorical or literal) of w t .",
"Given a sentence S , the sequence labeling predicts the metaphoricity of each word w t ( 1 t n ) in S .",
"We aim at developing a metaphor detection model for the classification task.",
"Our model returns a binary output, i.e. , 1 if the target word w t in S is metaphorical or 0 otherwise.",
"By sequentially changing the target word w t , our model can be generalized to classify the metaphoricity of each word in a sentence, as in sequence labeling.",
"The pre-trained language models, e.g. , BERT (De-vlin et al., 2019) and RoBERTa (Liu et al., 2019), usually take two sentences as input and return output to predict the relevance between two input sentences.",
"We adopt RoBERTa as the contextualized backbone model because RoBERTa is known to outperform BERT (Liu et al., 2019).",
"To design a metaphor detection model, we treat one input sentence as a single word (or a phrase).",
"As depicted in Figure 1, there are two paradigms for representing the interaction between two input sentences: all-to-all interaction and late interaction , as discussed in the document ranking problem (Khattab and Zaharia, 2020).",
"While all-to-all interaction takes two input sentences together as an input, late interaction encodes two sentences",
"separately over a siamese architecture.",
"Given a sentence S and a target word w t , all-to-all interaction can capture all possible interactions within and across w t and S , which incurs high computational cost.",
"Moreover, when some interactions across w t and S are useless, it may learn noisy information.",
"In contrast, because late interaction encodes w t and S independently, it naturally avoids unnecessary intervention across w t and S .",
"The sentence embedding vector also can be easily reused in computing the interaction with the target word.",
"In other words, the cost of encoding the sentence vector can be amortized for that of encoding different target words.",
"Because our goal is to identify whether the contextualized meaning of the target word w t is different from its isolated meaning, we adopt the late interaction paradigm for metaphor detection.",
"Our model encodes a sentence S with a target word and a target word w t into embedding vectors, respectively, and computes the metaphoricity score of the target word.",
"(In Section 4, it is found that our model using late interaction outperforms a baseline model using all-to-all interaction.) 3.2 Model Architecture We propose a novel metaphor detection model, namely, metaphor-aware late interaction over BERT ( MelBERT ) using metaphor identification theories, i.e. , Metaphor Identification Procedure (MIP) (Pragglejaz Group, 2007; Steen et al., 2010) and Selectional Preference Violation (SPV) (Wilks, 1975, 1978).",
"Figure 2 illustrates the overall architecture of MelBERT, which consists of three components: a sentence encoder Enc ( S ) , a target word encoder Enc ( w t ) , and a late interaction mechanism to compute a score.",
"We first explain the input layer for two encoders Enc ( S ) and Enc ( w t ) .",
"Each word in the sentence is converted to tokens using an improved implementation of byte-pair encoding (BPE) (Radford Transformer encoder Transformer encoder Concatenation Token embedding Segment embedding Position embedding [CLS] And finally, the debate sharpen ##ed [SEP] VERB [SEP] 0 1 2 3 4 6 7 8 9 LOC LOC TAR TAR POS [CLS] sharpen ##ed [SEP] 0 1 2 3 Linear + Softmax 5 v S v S, 5 v t SPV layer MIP layer Figure 2: Model architecture of MelBERT. When a target word w t is split into multiple tokens by BPE, the average pooling is used for the target word. et al., 2019).",
"As shown in the original BERT, the position embedding is used to represent the position of tokens.",
"The segment embedding is used to distinguish target tokens (denoted as [TAR]) and their local context (denoted as [LOC]).",
"When the sentence is represented as a composite sentence, the local context indicates a clause including target tokens.",
"For simplicity, we represent the local context using comma separator (,) in the sentence.",
"Besides, we add a special classification token [CLS] before the first token and a segment separation token [SEP] after the last token.",
"To make use of the POS feature of the target word, we append the POS tag for the target word after [SEP], as used in (Su et al., 2020).",
"The input representation is finally computed by the element-wise addition of token, position embedding, and segment embedding.",
"For Enc ( w t ) , the target word is converted to the tokens using BPE, but position and segment embedding are not used.",
"Given a sentence S = { w 1 , . . . , w n } , Enc ( S ) encodes each word into a set of contextualized embedding vectors, { v S , v S, 1 , . . . , v S,n } using the transformer encoder (Vaswani et al., 2017), where v S is the embedding vector corresponding to the [CLS] token and v S,i is the i -th embedding vector for w i in S .",
"Similarly, Enc ( w t ) encodes a target word w t into v t without context.",
"other words in S .",
"Therefore, v S,t and v t can be interpreted as different meanings for w t , i.e. , v S,t is contextualized representation of w t and v t is isolated representation of w t .",
"Then, we utilize two metaphor identification theories using contextualized embedding vectors.",
"MelBERT using MIP .",
"The basic idea of MIP is that a metaphorical word is identified by the gap between the contextual and literal meaning of a word.",
"To incorporate MIP into MelBERT, we employ two embedding vectors v S,t and v t , representing a contextualized embedding vector and an isolated embedding vector for w t , respectively.",
"Using these vectors, we identify the semantic gap for the target word in context and isolation.",
"MelBERT using SPV .",
"The idea of SPV is that a metaphorical word is identified by the semantic difference from its surrounding words.",
"Unlike MIP, we only utilize the sentence encoder.",
"Given a target word w t in S , our key assumption is that v S and v S,t show a semantic gap if w t is metaphorical.",
"Although v S and v S,t are contextualized, the meanings of the two vectors are different; v S represents the interaction across all pair-wise words in S , but v S,t represents the interaction between w t and other words in S .",
"In this sense, when w t is metaphorical, v S,t can be different from v S by the surrounding words of w t .",
"Late interaction over MelBERT .",
"Using the two strategies, MelBERT predicts whether a target word w t S is metaphorical or not.",
"We can compute a hidden vector h MIP by concatenating v S,t and v t for MIP.",
"where h MIP and f ( ) is a function for the MLP layer to learn the gap between two vectors v S,t and v t .",
"We can also compute a hidden vector h SPV using v S and v S,t for SPV.",
"h SPV = g ([ v S ; v S,t ]) , (4) where h SPV R h 1 and g ( ) is a function for the MLP layer to learn the semantic difference between v S and v S,t .",
"We combine two hidden vectors h MIP and h SPV to compute a prediction score: y = ( W (cid:62) [ h MIP ; h SPV ] + b ) , (5) where ( ) is the sigmoid function, W R 2 h 1 is the parameter, and b is a bias.",
"To learn MelBERT, finally, we use the cross-entropy loss function for binary classification as follows: L = N (cid:88) i =1 y i log y i + (1 y i ) log(1 y i ) , (6) where N is the number of samples in the training set.",
"y i and y i are the true and predicted labels for the i -th sample in the training set.",
"In this section, we first present the experimental setup, then report empirical results by comparing our model against strong baselines.",
"Datasets .",
"We use four well-known public English datasets.",
"First, the VU Amsterdam Metaphor Corpus (VUA) has been released in metaphor detection shared tasks in 2018 and 2020.",
"We use two versions of VUA datasets, called VUA-18 (Leong et al., 2018) and VUA-20 (Leong et al., 2020), where VUA-20 is the extension of VUA-18.",
"Let VUA-18 tr , VUA-18 dev , VUA-18 te denote the training, validation, and test datasets, split from VUA-18.",
"VUA-20 tr includes VUA-18 tr and VUA-18 dev .",
"VUA-20 te also includes VUA-18 te , and VUA-Verb te is a subset of VUA-18 te and VUA-20 te .",
"Because most of the tokens in a sentence are literal words in VUA-18, VUA-20 selectively chooses the tokens in the training and testing datasets.",
"VUA-18 te consists of four genres, including news, academic, fiction, and conversation.",
"It can also be categorized into different POS tags, such as verb, noun, adjective, and adverb.",
"Additionally, we employ MOH-X (Mohammad et al., 2016) Dataset #tokens %M #Sent Sent len VUA-18 tr 116,622 11.2 6,323 18.4 VUA-18 dev 38,628 11.6 1,550 24.9 VUA-18 te 50,175 12.4 2,694 18.6 VUA-20 tr 160,154 12.0 12,109 15 VUA-20 te 22,196 17.9 3,698 15.5 VUA-Verb te 5,873 30 2,694 18.6 MOH-X 647 48.7 647 8 TroFi 3,737 43.5 3,737 28.3 Table 1: Detailed statistics on benchmark datasets.",
"and TroFi (Birke and Sarkar, 2006) for testing purposes only.",
"MOH-X is a verb metaphor detection dataset with the sentences from WordNet and TroFi is also a verb metaphor detection dataset, including sentences from the 1987-89 Wall Street Journal Corpus Release 1.",
"The sizes of these datasets are relatively smaller than those of VUA datasets, and they have metaphorical words of more than 40%, while VUA-18 and VUA-20 datasets have about 10% of metaphorical words.",
"While MOH-X and TroFi only annotate verbs as metaphorical words, the VUA dataset annotates all POS tags as metaphorical words.",
"In this sense, we believe that the VUA dataset is more appropriate for training and testing models.",
"Table 1 summarizes detailed statistics on the benchmark datasets.",
"Baselines .",
"We compare our models with several strong baselines, including RNN-based and contextualization-based models.",
"RNN_ELMo and RNN_BERT (Gao et al., 2018): They employ the concatenation of the pre-trained ELMo/BERT and the GloVe (Pen-nington et al., 2014) embedding vectors as an input, and use BiLSTM as a backbone model.",
"Note that they use contextualized models only for input vector representation.",
"RNN_HG and RNN_MHCA (Mao et al., 2019): They incorporate MIP and SPV into RNN_ELMo (Gao et al., 2018).",
"RNN_HG compares an input embedding vector (literal) with its hidden state (contextual) through BiLSTM.",
"RNN_MHCA utilizes multi-head attention to capture the contextual feature within the window size.",
"target word and a sentence as two input sentences and computes a prediction score.",
"It can be viewed as a metaphor detection model over an all-to-all interaction architecture.",
"RoBERTa_SEQ (Leong et al., 2020): It takes one single sentence as an input, and a target word is marked as the input embedding token and predicts the metaphoricity of the target word using the embedding vector of the target word.",
"This architecture is used as the BERT-based baseline in the VUA 2020 shared task.",
"DeepMet (Su et al., 2020): It is the winning model in the VUA 2020 shared task.",
"It also utilizes RoBERTa as a backbone model and incorporates it with various linguistic features, such as global context, local context, POS tags, and fine-grained POS tags.",
"Evaluation protocol .",
"Because the ratio of metaphorical words is relatively small, we adopt three metrics, e.g. , precision, recall, and F1-score, denoted by Prec, Rec, and F1.",
"MOH-X and TroFi datasets are too smaller than VUA datasets.",
"Thus, we only used them as the test datasets; metaphor detection models are only trained in VUA datasets, and zero-shot transfer is conducted to evaluate the effectiveness of model generalization.",
"Implementation details .",
"For four baselines, we used the same hyperparameter settings 1 in (Gao et al., 2018; Mao et al., 2019; Su et al., 2020).",
"For DeepMet 2 , we evaluated it with/without bagging technique.",
"While DeepMet (Su et al., 2020) exploits two optimization techniques, bagging and ensemble, we only used a bagging technique for MelBERT and DeepMet.",
"It is because we want to evaluate the effectiveness of model designs.",
"The performance difference for DeepMet between the original paper and ours thus comes from the usage of the ensemble method.",
"For contextualized models, we used a pre-trained RoBERTa 3 with 12 layers, 12 attention heads in each layer, and 768 dimensions of the hidden state.",
"For contextualized baselines, we set the same hyperparameters with MelBERT, which were tuned on VUA-18 dev based on F1-score.",
"The batch size and max sequence length were set as 32 and 150.",
"For training, the number of epochs was three with Adam optimizer.",
"1 https://github.com/RuiMao1988/Sequential-Metaphor-Identification 2 https://github.com/YU-NLPLab/DeepMet 3 https://huggingface.co/roberta-base Dataset Model Prec Rec F1 VUA-18 RNN_ELMo 71.6 73.6 72.6 RNN_BERT 71.5 71.9 71.7 RNN_HG 71.8 76.3 74.0 RNN_MHCA 73.0 75.7 74.3 RoBERTa_BASE 79.4 75.0 77.1 RoBERTa_SEQ 80.4 74.9 77.5 DeepMet 82.0 71.3 76.3 MelBERT 80.1 76.9 78.5 DeepMet-CV 77.5 80.2 78.8 MelBERT-CV 78.9 80.7 79.8 VUA-Verb RNN_ELMo 68.2 71.3 69.7 RNN_BERT 66.7 71.5 69.0 RNN_HG 69.3 72.3 70.8 RNN_MHCA 66.3 75.2 70.5 RoBERTa_BASE 76.9 72.8 74.7 RoBERTa_SEQ 79.2 69.8 74.2 DeepMet 79.5 70.8 74.9 MelBERT 78.7 72.9 75.7 DeepMet-CV 76.2 78.3 77.2 MelBERT-CV 75.5 78.7 77.1 Table 2: Performance comparison of MelBERT with baselines on VUA-18 and VUA-Verb (best is in bold and second best is in italic underlined ).",
"We increased the learning rate from 0 to 3e-5 during the first two epochs and then linearly decreased it during the last epoch.",
"We set the dropout ratio as 0.2.",
"All experimental results were averaged over five runs with different random seeds.",
"We conducted all experiments on a desktop with 2 NVidia TITAN RTX, 256 GB memory, and 2 Intel Xeon Processor E5-2695 v4 (2.10 GHz, 45M cache).",
"We implemented our model using PyTorch.",
"All the source code is available at our website 4 .",
"Overall results .",
"Tables 2 and 3 report the comparison results of MelBERT against other baselines using RNNs and contextualized models on VUA-18, VUA-20, and VUA-Verb.",
"It is found that MelBERT is consistently better than strong baselines in terms of F1-score.",
"MelBERT outperforms (F1 = 78.5, 75.7, and 72.3) DeepMet (Su et al., 2020) with 2.8%, 1.0%, and 1.9% performance gains on the three datasets.",
"MelBERT also outperforms contextualized baseline models ( i.e. , RoBERTa_BASE and RoBERTa_SEQ), up to 1.2-1.5% gains on the three datasets, indicating that MelBERT effectively utilizes metaphorical identification theories.",
"When combining MelBERT and DeepMet with the bagging technique, both models ( i.e. , MelBERT-CV and DeepMet-CV) show better performance than their original models by aggregating multiple models trained with 10-fold cross-validation process as used in (Su et al., 2020).",
"MelBERT-POS Model Prec Rec F1 Verb RNN_ELMo 68.1 71.9 69.9 RNN_BERT 67.1 72.1 69.5 RNN_HG 66.4 75.5 70.7 RNN_MHCA 66.0 76.0 70.7 RoBERTa_BASE 77.0 72.1 74.5 RoBERTa_SEQ 74.4 75.1 74.8 DeepMet 78.8 68.5 73.3 MelBERT 74.2 75.9 75.1 Adjective RNN_ELMo 56.1 60.6 58.3 RNN_BERT 58.1 51.6 54.7 RNN_HG 59.2 65.6 62.2 RNN_MHCA 61.4 61.7 61.6 RoBERTa_BASE 71.7 59.0 64.7 RoBERTa_SEQ 72.0 57.1 63.7 DeepMet 79.0 52.9 63.3 MelBERT 69.4 60.1 64.4 Adverb RNN_ELMo 67.2 53.7 59.7 RNN_BERT 64.8 61.1 62.9 RNN_HG 61.0 66.8 63.8 RNN_MHCA 66.1 60.7 63.2 RoBERTa_BASE 78.2 69.3 73.5 RoBERTa_SEQ 77.6 63.9 70.1 DeepMet 79.4 66.4 72.3 MelBERT 80.2 69.7 74.6 Noun RNN_ELMo 59.9 60.8 60.4 RNN_BERT 63.3 56.8 59.9 RNN_HG 60.3 66.8 63.4 RNN_MHCA 69.1 58.2 63.2 RoBERTa_BASE 77.5 60.4 67.9 RoBERTa_SEQ 76.5 59.0 66.6 DeepMet 76.5 57.1 65.4 MelBERT 75.4 66.5 70.7 Table 5: Model performance of different POS tags in VUA-18 (best is in bold and second best is in italic underlined ).",
"CV still shows better performance for all metrics than DeepMet-CV in VUA-18 and VUA-20.",
"Also, MelBERT-CV (Recall = 73.7) significantly improves the original MelBERT (Recall = 68.6) in terms of recall.",
"It implies that MelBERT-CV can capture various metaphorical expressions by combining multiple models.",
"Besides, it is found that contextualization-based models show better performance than RNN-based models in VUA-18 and VUA-Verb.",
"While RNN-based models show 71-74% F1-score, contextualization-based models show 76-78% F1-score on VUA-18.",
"It is revealed that RNN-based models are limited in capturing various aspects of words in context.",
"Compared to RNN_ELMo and RNN_BERT, it also indicates that utilizing contextualization-based models as backbone models can have a better effect than simply utilizing it as an extra input embedding vector in (Gao et al., 2018; Mao et al., 2019).",
"VUA-18 breakdown analysis .",
"Table 4 reports the comparison results for four genres in the VUA-18 dataset.",
"MelBERT still shows better than or comparable to all competitive models in both breakdown datasets.",
"Compared to RNN-based models, MelBERT achieves substantial improvements, as high as 4.9% (Academic), 4.4% (Conversation), 10.2% (Fiction), and 2.8% (News) in terms of F1-score.",
"Particularly, they show the lowest accuracy because Conversation and Fiction have more complicated or rare expressions than other genres.",
"For example, Conversation contains colloquial expressions or fragmented sentences such as ah, cos, yeah and Fiction often contains the names of fictional characters such as Tepilit, Laibon which do not appear in other genres.",
"Nonetheless, MelBERT shows comparable or the best performance in all genres.",
"For Academic and Fiction, MelBERT particularly outperforms all the models in terms of F1-score.",
"Table 5 reports the comparison result for four POS tags in the VUA-18 dataset.",
"For all POS tags, MelBERT consistently shows the best performance in terms of the F1-score.",
"Compared to RNN-based models, MelBERT achieves as much as 5.9% (Verb), 3.4% (Adjective), 14.5% (Adverb), and 10.3% (Noun) gains in terms of F1-score.",
"For all POS tags, MelBERT also outperforms DeepMet.",
"It means that MelBERT using metaphorical identification theories can achieve consistent improvements regardless of POS tags of target words.",
"Zero-shot transfer on MOH-X and TroFi .",
"We evaluate a zero-shot learning transfer across different datasets, where the models are trained with the VUA-20 training dataset, and MOH-X and TroFi are used as test datasets.",
"Although it is a challenging task, it is useful for evaluating the gener-Model VUA-18 VUA-20 Prec Rec F1 Prec Rec F1 MelBERT 80.1 76.9 78.5 76.4 68.6 72.3 (-) MIP 77.8 75.8 76.7 74.7 67.8 71.1 (-) SPV 79.5 76.3 77.9 74.9 68.6 71.7 Table 7: Effect of different metaphorical identification theories on VUA-18 and VUA-20.",
"alization power of trained models.",
"Table 6 reports the comparison results of MelBERT against other contextualization-based models.",
"For the MOH-X dataset, MelBERT (F1 = 79.2) shows the best performance in terms of F1-score with 0.61.6% performance gains.",
"It indicates that MelBERT is an effective generalization model.",
"For the TroFi dataset, the overall performance of all the models is much lower than MOH-X.",
"It is because the average length of the sentences in the TroFi dataset is much longer and sentences are more complicated than those in MOH-X.",
"Also, note that we trained DeepMet with the VUA-20 training dataset for evaluating a zero-shot transfer, while (Su et al., 2020) reported the results for DeepMet trained and tested with the MOH-X and TroFi datasets.",
"While the performance gap between models is much small in terms of precision, MelBERT is better than DeepMet in terms of recall.",
"It means that MelBERT can capture complicated metaphorical expressions than DeepMet.",
"Ablation study of MelBERT .",
"Table 7 compares the effectiveness of metaphor identification theories.",
"It is found that MelBERT using both strategies consistently shows the best performance.",
"Also, MelBERT without SPV shows better performance than MelBERT without MIP, indicating that MelBERT using late interaction is more effective for capturing the difference between contextualized and isolated meanings of target words.",
"Nonetheless, MelBERT shows the best performance by syn-ergizing both metaphor identification strategies.",
"Error analysis .",
"Table 8 reports qualitative evaluation results of MelBERT.",
"Based on the original annotation guideline 5 , we analyze several failure cases of MelBERT.",
"For MelBERT without MIP, it is difficult to find common words with multiple meanings, e.g. , go and feel .",
"Also, when a sentence includes multiple metaphorical words, it mostly fails to detect metaphorical words.",
"In this case, 5 http://www.vismet.org/metcor/documentation/home.html (-) MIP (-) SPVM e l BERT Sentence (cid:88) (cid:88) Manchester is not alone .",
"the surrounding words of a target word are not a cue to detect metaphors using SPV.",
"Meanwhile, MelBERT without SPV has a failure case if target words are metaphorical for personification.",
"That is, using MIP only, the target word can be closely interpreted by its literal meaning.",
"As the most difficult case, MelBERT often fails to identify metaphorical words for borderline or implicit metaphors, e.g. , Way of the World is poetic.",
"In this work, we proposed a novel metaphor detection model, namely, metaphor-aware late interaction over BERT ( MelBERT ), marrying pre-trained contextualized models with metaphor identification theories.",
"To our best knowledge, this is the first work that takes full advantage of both contextualized models and metaphor identification theories.",
"Comprehensive experimental results demonstrated that MelBERT achieves state-of-the-art performance on several datasets.",
"This work was supported by the National Research Foundation of Korea (NRF) (NRF-2018R1A5A1060031).",
"Also, this work was supported by the Institute of Information & communications Technology Planning & evaluation (IITP) grant funded by the Korea government (MSIT) (No.2019-0-00421, AI Graduate School Support Program and No.2019-0-01590, High-Potential Individuals Global Training Program).",
"The work of Dongwon Lee was in part supported by NSF awards #1742702, #1820609, #1909702, #1915801, and #1934782."
] | [
"abstain",
"method",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"Abstract",
"While pre-training techniques are working very well in natural language processing, how to pre-train a decoder and effectively leverage it for neural machine translation (NMT) still remains a tricky issue.",
"The main reason is that the cross-attention module between the encoder and decoder cannot be pre-trained, and the combined encoder-decoder model cannot work well in the fine-tuning stage because the inputs of the decoder cross-attention come from unknown encoder outputs.",
"In this paper, we propose a better pre-training method for NMT by defining a semantic interface ( SemFace ) between the pre-trained encoder and the pre-trained decoder.",
"Specifi-cally, we propose two types of semantic interfaces, including CL-SemFace which regards cross-lingual embeddings as an interface, and VQ-SemFace which employs vector quantized embeddings to constrain the encoder outputs and decoder inputs in the same language-independent space.",
"We conduct massive experiments on six supervised translation pairs and three unsupervised pairs.",
"Experimental results demonstrate that our proposed SemFace can effectively connect the pre-trained encoder and decoder, and achieves significant improvement by 3.7 and 1.5 BLEU points on the two tasks respectively compared with previous pre-training-based NMT models.",
"In recent years, pre-trained language models (Pe-ters et al., 2018; Devlin et al., 2018; Radford et al., 2019; Yang et al., 2019; Raffel et al., 2020) significantly boost the performances of various natural language processing (NLP) tasks, receiving extensive attention in NLP communities.",
"Following the idea of unsupervised pre-training methods in the NLP area, several approaches (Lample and Conneau, 2019; Zhu et al., 2020; Lewis et al., 2020; Contribution during internship at MSRA. Liu et al., 2020) have been proposed to improve neural machine translation (NMT) models with pretraining by leveraging the large-scale monolingual corpora.",
"The typical training process usually consists of two stages: pre-training an encoder and a decoder separately with a large monolingual corpus in a self-supervised manner, and then fine-tuning on specific NMT tasks (Lample and Conneau, 2019).",
"The above method essentially pre-trains a BERT-like (Devlin et al., 2019) Transformer encoder, and uses it to initialize both the encoder and decoder.",
"Although it shows promising results, pre-training decoder benefits little in their results.",
"The potential reason is that the cross-attention between the encoder and decoder is not pre-trained, which is randomly initialized when they are connected for fine-tuning, resulting in a lack of semantic interfaces between the pre-trained encoder and decoder.",
"Another line of work attempts to pre-train a sequence-to-sequence model directly, e.g., MASS (Song et al., 2019) and BART (Lewis et al., 2020).",
"But these methods usually use monolingual denoising auto-encoder as the main training objective, and cannot learn the corss-lingual mapping between source and target languages explicitly.",
"In parallel to the idea of DALL E 1 which defines the cross-modality interface of image and text, we propose to pre-train the encoder and decoder with a language-independent semantic interface ( SemFace ) for neural machine translation.",
"With the semantic interface, the encoder is pre-trained to extract features to this space, and the decoder is pre-trained to generate contents with features provided by it.",
"By defining this interface, we can decouple the encoder-decoder network and pre-train them separately.",
"During the decoder pre-training, the cross-attention module is also pre-trained, thus the pre-trained encoder and decoder can be naturally 1 https://openai.com/blog/dall-e/ Figure 1: Overview of our method (Top: pre-training; Bottom: fine-tuning).",
"connected for MT fine-tuning.",
"We propose two types of semantic interfaces, namely CL-SemFace and VQ-SemFace .",
"The former takes the trained unsupervised cross-lingual embeddings (Artetxe et al., 2018) as the interface for encoder and decoder pretraining.",
"Inspired by the success of neural discrete representation learning (Van Den Oord et al., 2017), the latter uses language-independent vector quantized (VQ) embeddings (semantic unites) as the interface to map encoder outputs and decoder inputs into the shared VQ space.",
"Experiments conducted on both supervised and unsupervised translation tasks demonstrate that SemFace effectively connects the pre-trained encoder and decoder, and achieves a significant improvement by 3.7 and 1.5 BLEU points on the two tasks respectively.",
"Our contributions are listed as follows: To the best of our knowledge, this is the first work to investigate and define a semantic interface between encoder and decoder for the MT pre-train-finetune framework.",
"We design and compare two effective types of semantic interfaces, which utilize cross-lingual embeddings and vector quantized embeddings respectively.",
"We extensively verify the effectiveness of our proposed model on supervised and unsupervised NMT tasks.",
"Particularly, our proposed CL-SemFace and VQ-SemFace lead to significant improvements of 3.38 and 3.76 BLUE points on low-resource language pairs.",
"The overview of our proposed SemFace is illustrated in Figure",
"1. As shown in this figure, our method can be divided into two steps.",
"First, we use monolingual data to pre-train encoder and decoder separately with a semantic interface between them.",
"The encoder is pre-trained to map the input from the monolingual semantic space into the interface, while the decoder is pre-trained to use the content from the interface via the cross attention module to finish decoding.",
"The parameters of the encoder and the decoder are updated independently, thus their pre-training processes can be either jointly or separately done.",
"Then, we remove the semantic interface, and connect the pre-trained encoder and decoder with the pre-trained cross-attention as a sequence-to-sequence model for the subsequent machine translation fine-tuning.",
"Note that in Figure 1, the input to the encoder and decoder includes token representations, language embeddings and positional embeddings.",
"There are three types of semantic interface.",
"The first is the default output space of pre-trained encoder with the masked language model (MLM) training loss.",
"In fact, previous work (Song et al., 2019; Lewis et al., 2020; Liu et al., 2020) adopts this default settings in their pre-training method for machine translation.",
"The second one is CL-Figure 2: CL-SemFace, which regards a pre-trained cross-lingual embeddings as a semantic interface.",
"SemFace (Sec. 2.2), which uses the pre-trained context-free cross-lingual embedding space as the semantic interface.",
"The third is VQ-SemFace (Sec. 2.3), which automatically learns a context-aware vector quantized (VQ) embedding space as the interface during pre-training.",
"The last two types define a language-independent interface, enforcing the pre-trained encoder and the decoder to generate or leverage the language-independent information.",
"They can provide a better initialization for the following MT fine-tuning.",
"We give our pre-training algorithm in Alg.",
"1. Note that the parameters of the cross-attention are included in dec .",
"Next, we will introduce our proposed CL-SemFace and VQ-SemFace in detail.",
"CL-SemFace uses the cross-lingual embedding space as the interface between the encoder and the decoder during pre-training.",
"We first concatenate the monolingual corpora of two languages and learn joint BPE, and then train cross-lingual BPE embeddings with VecMap (Artetxe et al., 2018).",
"As shown in Figure 2, on the encoder side, we initialize the linear projection weights (output embeddings) before the Softmax with the pre-trained BPE embeddings, and pre-train the encoder with two training objectives.",
"The first is the commonly used Masked Language Model (MLM) (Devlin et al., 2018) l mlm , and the other is the MSE loss l mse between the encoder output hiddens and the corresponding output embeddings.",
"The latter controls the scale of the encoder outputs to be the same as the cross-lingual embeddings, in order to match the encoder outputs and the cross-attention inputs.",
"To stabilize training, we calculate the MSE loss before the last normalization layer of the encoder.",
"Formally, given an input sample x , the encoder pre-training loss function is: L enc = L mlm + L mse = (cid:88) i [ log p ( x i | LN( h i ( x ))) + ( W i h i ( x )) 2 ] (1) where x i is the masked tokens in the input sentence, h i is the activation of the final layer of the encoder but before the final layer normalization LN , W i is the output embedding of the ground-truth token, and p is the output probability of the Softmax.",
"When pre-training the decoder, we attempt to use the content from the semantic interface to simulate encoder outputs.",
"To achieve that, given a monolingual training sample x , we first add some noise 1 into it to get the noisy sample C ( x )) , then we pass it through an embedding layer initialized with the pre-trained BPE embeddings to get the language-independent representations E ( C ( x )) .",
"The training target of the decoder is either the MLM or the Casual Language Model (CLM) (Lample and Conneau, 2019).",
"Different from them, in our work, the decoder is trained to generate contents with the language-independent representations from the semantic interface.",
"During this process, the parameters of the enc-dec attention (cross-attention) can also be pre-trained, which is critical to the subsequent machine translation fine-tuning.",
"Formally, 1 The noise here includes words dropping and swapping as in Lample et al. (2018).",
"L dec mlm = (cid:88) j log p [ y j | ( s j ( x )) , E ( C ( x ))] (2) or L dec clm = (cid:88) j log p ( y j | ( s <j ( x )) , E ( C ( x ))] (3)",
"where s is the final output hidden of the decoder and p is the output probability of the Softmax.",
"The CL semantic space is constrained with the cross-lingual word embedding, which is context-independent, meaning that the different meanings of the same word share the same embedding, and the number of semantic units should be the same with the size of the vocabulary.",
"In order to learn context-dependent semantic units freely, we also propose another interface type, vector quantized embeddings, inspired by the recent success of VQ-based speech pre-training (Baevski et al., 2020).",
"The concept of Vector Quantized (VQ) representations is first proposed in Van Den Oord et al. (2017).",
"The method uses a learnable code-book combined with the nearest neighbor search to train the discrete latent variable model.",
"The code-book is essentially a group of learnable embeddings (codes) { z } K 1 .",
"The nearest neighbor search is performed between the encoder outputs and the embedding of the latent code using the L 2 distance metric.",
"Formally, given the encoder output h ( x ) , the discrete latent variable assignment is given by z i = arg min j [ K ] || h ( x ) z j || 2 (4) where K is the number of codes in the code-book, z j is j -th quantized vector in the code-book.",
"That means, z i is the output of the VQ layer corresponding to h ( x ) .",
"The main issue of this method is that the arg min operation is not differentiable.",
"Following Baevski et al. (2020), we use the Gumbel-Softmax (Gumbel, 1954; Jang et al., 2016) to select discrete codebook variables in a fully differentiable way and we use the straight-through estimator of Jang et al. (2016).",
"Given the encoder output h ( x ) , we apply a linear layer followed by a ReLU and another linear which outputs l RK logits for the Gumbel-Softmax.",
"During inference, we simply pick the largest index in l .",
"During training, the output probability to choose the j -th code is p j = exp( l j + v j ) / (cid:80) Kk =1 exp( l k + v k ) / (5) where v = log( log( u )) and u are uniform samples from U (0 , 1) .",
"In the forward pass, only the embedding in the code-book with the largest probability is used, which means the output of the VQ layer is z i , where i = arg max i p i , while in the backward pass, the gradient is passed to all the Gumbel-Softmax outputs.",
"The VQ layer groups the context-aware hidden states into limited semantic units (codes), and the space of these codes can be used as our second language-independent semantic interface.",
"As shown in Figure 3, for the encoder, we add a VQ layer between the encoder output and the prediction layer of MLM.",
"The training loss is the combination of the original MLM loss and two auxiliary losses as used in Baevski et al. (2020).",
"The first is the diversity loss L d to encourage the model to use the code-book entries equally often by maximizing the entropy of the averaged Softmax distribution over the codes across a batch of utterances as L d = 1 KK (cid:88) k =1 p k log p k (6) where p k is the averaged probability of choosing the k -th code in the code-book across a batch, and p k is calculated by",
"Eq.(5).",
"The second auxiliary loss is an L 2 penalty to stabilize the training, which is applied to the activations of the final encode layer but before the last normalization of the encoder.",
"Therefore, the total loss of encoder pre-training is L enc = L mlm + L d + L 2 .",
"For the decoder, similar to CL-SemFace, we also use the content from the VQ interface to simulate the encoder output during pre-training.",
"To get the VQ output, given a training sample, we first feed it into an embedding layer and then pass the readout embeddings to a two-layer Transformer, which can be regarded as a feature extractor.",
"We use the Transformer output as the representations of each word and find the corresponding codes in the codebook according to",
"Eq.(5).",
"The readout codes are the simulated encoder output, and they will be fed into the decoder via the cross-attention.",
"Note that in the decoder pre-training, the VQ code-book is fixed.",
"The training goal of the decoder is the same as that in CL-SemFace, i.e., L dec mlm or L dec clm .",
"The semantic interface acts as a bridge to connect the encoder and decoder during pre-training.",
"The encoder is pre-trained to project the input to the features in the semantic interface space, while the decoder is pre-trained to leverage the features from the interface space through the cross-attention to generate outputs.",
"With this method, we can pretrain all the parameters of the whole sequence-to-sequence model, including the cross-attention between the encoder and the decoder.",
"After pretraining, we connect the encoder and the decoder via the cross-attention directly by removing the semantic interface as shown in Figure 1 (bottom).",
"We then fine-tune the model on low-resource supervised NMT tasks and unsupervised NMT tasks.",
"For the low-resource settings, we use the standard cross-entropy loss log p ( y | x ) given the parallel training sample { x , y } , and for the unsupervised settings, we use the denoising auto-encoder and iterative back-translation as the objectives as in Lample and Conneau (2019).",
"The languages we choose for our experiments are English (en), French (fr), German (de), Romanian (ro), Finnish (fi), Estonian (et), Latvian (lv), Lithuanian (lt), Gujarati (gu), and Kazakh (kk).",
"The details of the datasets and statistics for each language pair are listed in Table",
"1. All the data is provided by the recent WMT translation tasks.",
"Para Data in this table means the number of training samples of x-en.",
"The language pairs with parallel data in the table are chosen for the low-resource supervised settings, while those with only monolingual data are chosen for the unsupervised scenario only.",
"For the language with more than 50 million monolingual data, we randomly sample 50 million from the corpus.",
"We choose the corresponding development and test sets for each language pair from WMT translation tasks, as listed in Table",
"2. Lang Mono Data Source #Sent Para Data en NC 50M fr NC 50M de NC 50M ro NC 21M fi NC, CC 50M 2.7M et NC, CC, BE 50M 1.9M lv NC, CC 38M 4.5M lt NC, CC, Wiki 50M 2.1M gu NC, CC, Wiki 4.3M 10K kk NC, CC, Wiki 12.7M 91K Table 1: The datasets used in our experiments.",
"We compare our method with two baselines.",
"The first is XLM (Lample and Conneau, 2019), which pre-trains a Transformer encoder with the MLM or CLM loss and then initializes the encoder and the decoder with the pre-trained model.",
"The parameters of the cross-attention module are randomly initialized.",
"The second baseline is mBART (Liu et al., 2020), which pre-trains the whole sequence-to-sequence architecture with the denoising auto-encoder loss on the multilingual corpus.",
"For a fair Method en-fi en-et en-lt en-lv en-gu en-kk avg.",
"comparison, we use their pre-training method on the concatenated corpora of each language pair, i.e., mBART02 in their paper.",
"For the low-resource supervised settings, we also compare our method with the basic Transformer without pre-training.",
"If there is a parallel corpus for a certain language pair, we use the parallel data to fine-tune the pre-trained models in the two baselines.",
"If there is only a monolingual corpus, we use the denoising auto-encoder and iterative back-translation to fine-tune the pre-trained models.",
"We implement our method based on the code released by Lample and Conneau (2019).",
"For each language pair, we first lower-case all the case-sensitive languages by default and pre-process the concatenated corpora of each language pair with 60,000 joint BPE codes.",
"For both encoder and decoder, we use 6-layer Transformers with the embedding and hidden dimensions of 1024, 8 attention heads, and a dropout rate of 0.1.",
"The maximum sequence length is 256 and the batch size is 128.",
"We use the Adam optimizer (Kingma and Ba, 2014) for both pre-training and fine-tuning.",
"During pre-training, the learning rate is 0.0001 constantly.",
"During MT fine-tuning, the learning rate is 0.0001 with 4,000 warm-up steps, and then decayed based on the inverse square root of the update number.",
"The loss of the denoising auto-encoder objective is weighted by a coefficient , and it is linearly decreased to 0.1 in the first 100,000 steps and decreased to 0 in the next 200,000 steps.",
"For VQ-SemFace, the code-book contains 102,400 codes with their dimensions being 1024.",
"In this section, we report the result of our pretraining method fine-tuned with neural machine translation.",
"We have two settings.",
"The first setting is low-resource supervised machine translation, which uses additional parallel corpus to fine-tune the pre-trained encoder and decoder.",
"The second is unsupervised neural machine translation, which uses the two objectives of denoising auto-encoder and back-translation to fine-tune the model.",
"The results on the low-resource language pairs are shown in Table",
"3. From the table, we see that our proposed methods CL-SemFace and VQ-SemFace significantly outperform the non-pre-training Transformer with an average improvement of over 3 BLEU scores.",
"Compared with the strong baseline mBART, our methods also outperform it by 0.8 to 1.2 BLEU scores.",
"For most translation directions, VQ-SemFace is better than CL-SemFace, maybe due to the lower quality of cross-lingual language embeddings of these language pairs, especially for the distant language pairs (en-gu and en-kk).",
"This also shows the shortcomings of the CL-SemFace that it depends on the quality of the cross-lingual embeddings.",
"If the quality is not good, the semantic interface will be far from language-independent, posing difficulties for the splicing of the pre-trained encoder and the pre-trained decoder.",
"By contrast, VQ-SemFace gets rid of the constraints of cross-lingual embeddings and learns a context-dependent semantic space shared across languages, which can handle those language pairs with low-quality cross-lingual embeddings better.",
"We also report the results of three unsupervised language pairs in Table",
"4. From the table, we find our proposed methods also significantly outperform the baseline XLM over 1 BLEU score.",
"Compared with mBART, we also obtain an improvement of nearly 0.9 BLEU score (CL-SemFace).",
"Contrary to the result of low-resource pairs in Table 3, for the language pairs in Table 4, we see the performance of CL-SemFace is better than VQ-SemFace.",
"This Method en-fr en-de en-ro avg.",
"may be because the cross-lingual embeddings of these rich-resource language pairs are of higher quality, thus the semantic interface is initialized better during the pre-training.",
"In this subsection, we first investigate the influence of the encoder losses (Eq. 1) by removing each of them independently in the encoder pre-training.",
"Besides, note that there are two types of loss used in our decoder pre-training, MLM and CLM, as shown in Eq.",
"(2,3), so we also compare the results with different losses in decoder pre-training, taking the supervised pair en-fi and unsupervised pair en-ro as examples.",
"From the table, we find that for VQ-SemFace under encoder pre-training, the most influential auxiliary loss is the diversity loss L d , which contributes 4.33 BLEU scores in the final results, which is designed to encourage the model to use the codebook entries equally often.",
"According to our observation, without L d , the model only uses a small group of codes in the code-book ( < 30%), which indeed shrinks the VQ semantic space and leads to the bad performance.",
"L mse and L 2 have a similar effect that stabilizes the training, contributing about 1 BLEU score in the final result.",
"For decoder pre-training, the performance of the two losses is comparable, with the MLM slightly better.",
"In this section, we investigate the influence of the data quantity in the experiments.",
"The language pair we choose is de-en, which has a large parallel corpus and makes it possible to conduct our investigation.",
"We compare the performance of the model with our pre-training method and the model without pre-training.",
"Note that we do not use any monolingual data in the training so the result here is not comparable with that in Table",
"4. Figure 4: Test BLEU of de-en",
"As shown in Figure 4, when the number of parallel training data is less than 10 6 .",
"7 5M , the model with pre-training significantly outperforms the non-pre-training model by about 3 to 5 BLEU scores.",
"However, when the training samples in-crease to over 10M, there is almost no difference in performance between the two models.",
"As mentioned in Sec.2.3, VQ space could be regarded as a language-independent semantic interface for the encoder and decoder pre-training.",
"To test whether VQ space is trained to contain cross-lingual representations, we carry out an analysis with a parallel sample of de-en.",
"For each token pair ( w en , w de ) in the two sentences, we collect top-100 codes according to Eq.",
"(5), and calculate how much the codes overlapped, as code 100 ( w en ) code 100 ( w de ) 100 .",
"As shown in Figure 5, the translated tokens share much of the codes chosen from the VQ code-book, which verifies our motivation that VQ could act like a language-independent semantic interface.",
"Pre-training has been widely used in NLP tasks to learn better language representations (Peters et al., 2018; Devlin et al., 2018; Lample and Conneau, 2019; Radford et al., 2019; Yang et al., 2019; Dong et al., 2019; Lewis et al., 2020).",
"Typically, these methods first pre-train neural networks on large-scale unlabeled corpora, and then fine-tune the models on downstream tasks (Devlin et al., 2018).",
"The early pre-training techniques mainly focused on the natural language understanding tasks such as the GLUE benchmark (Wang et al., 2018) , and later it was gradually extended to the natural language generation tasks, e.g., NMT.",
"Recently, a prominent line of work has been proposed to improve NMT with pre-training.",
"These techniques can be broadly classified into two categories.",
"The first category usually uses pre-trained models as feature extractors of a source language, or initializes the encoder and decoder with pre-trained models separately (Lample and Conneau, 2019; Ren et al., 2019; Yang et al., 2020a; Zhu et al., 2020).",
"For example, Lample and Conneau (2019) proposed a cross-lingual language model with a supervised translation language modeling objective, and used MLM or CLM to pre-train the encoder and decoder of NMT.",
"However, the combined encoder-decoder model, where the cross-attention is randomly initialized, often does not work well because of the lack of semantic interfaces between the pre-trained encoder and decoder.",
"There is also some work trying to leverage BERT-like pre-trained models for MT with an adapter (Guo et al., 2020) or an APT framework (Weng et al., 2020).",
"The former defines additional layers in the pre-trained encoder and decoder during fine-tuning, while the last adopts a fusion mechanism or knowledge distillation to leverage knowledge in BERT for MT. Different from them, we enable the encoder and decoder to interact with a semantic interface during pre-training, and they can be connected directly for the MT fine-tuning without any other additional layers or training loss.",
"The second category methods pre-train a whole sequence-to-sequence model for NMT.",
"MASS (Song et al., 2019) employed the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence.",
"BART (Lewis et al., 2020) adopted a similar framework and trained the model as a denoising auto-encoder.",
"mBART (Liu et al., 2020) trained BART model on large-scale monolingual corpora in many languages.",
"Although the above work can pre-train the cross-attention of decoder, they are learned on monolingual denoising auto-encoding and cannot learn the corss-lingual transformation between source and target languages.",
"There is also some work trying to explicitly introduce cross-lingual information in a code-switch way during the sequence-to-sequence pre-training, such as CSP (Yang et al., 2020b) and mRASP (Lin et al., 2020).",
"However, their methods need a lexicon or phrase translation table, which is inferred from unsupervised cross-lingual embeddings.",
"Therefore, they depend on the quality of the dictionary.",
"The most similar work to ours is probably the one of DALL E and CLIP (Radford et al., 2020).",
"DALL E is a transformer language model that receives both the text and the image as a single stream of data.",
"The core idea is to define the cross-modality interface of image and text, which can generate images from text descriptions.",
"In this paper, to address the above limitations of pretraining methods for NMT, we attempt to define a cross-lingual semantic interface to connect the pre-trained encoder and decoder.",
"We propose SemFace, a better pre-training method for neural machine translation.",
"The key point is to use a semantic interface to connect the pre-trained encoder and decoder.",
"By defining this interface, we can pre-train the encoder and decoder separately with the same intermediate language-independent space.",
"The cross-attention can also be pre-trained with our method so that we can naturally combine the pre-trained encoder and decoder for fine-tuning.",
"We introduce and compare two semantic interfaces, e.g., CL-SemFace and VQ-SemFace, which leverage unsupervised cross-lingual embeddings and vector quantized embeddings as the intermediate interfaces respectively.",
"Massive experiments on supervised and unsupervised NMT translation tasks show that our proposed SemFace obtains substantial improvements over the state-of-the-art baseline models.",
"In the future, we will design and test more semantic interface types for extensions.",
"This work is supported in part by National Key R&D Program of China 2018AAA0102301 , and NSFC 61925203 ."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"method",
"method",
"method",
"objective",
"method",
"other"
] |
[
"Recent work has shown that fine-tuning large networks is surprisingly sensitive to changes in random seed(s).",
"We explore the implications of this phenomenon for model fairness across demographic groups in clinical prediction tasks over electronic health records (EHR) in MIMIC-III the standard dataset in clinical NLP research.",
"Apparent subgroup performance varies substantially for seeds that yield similar overall performance, although there is no evidence of a trade-off between overall and subgroup performance.",
"However, we also find that the small sample sizes inherent to looking at intersections of minority groups and somewhat rare conditions limit our ability to accurately estimate disparities.",
"Further, we find that jointly optimizing for high overall performance and low disparities does not yield statistically significant improvements.",
"Our results suggest that fairness work using MIMIC-III should carefully account for variations in apparent differences that may arise from stochas-ticity and small sample sizes.",
"Fine-tuning pre-trained transformers (Vaswani et al., 2017) such as BERT (Devlin et al., 2019) has become the dominant paradigm in NLP, owing to their performance across a range of downstream tasks.",
"Clinical NLP in which we often aim to make predictions on the basis of notes in electronic health records (EHRs) is no exception (Alsentzer et al., 2019).",
"However, fine-tuning large networks is a stochastic process.",
"Performance can vary considerably as a function of hyperparame-ter choice, and many parameter sets can yield the same validation accuracy (i.e., the model is not identifiable), and more generally the problem is underspecified (D'Amour et al., 2020).",
"Recent work has demonstrated that the choice of random seeds alone can have dramatic impact on model performance in NLP and beyond, even when all men women white black asian hispanic group 0 .",
"other hyper-parameters are kept fixed (Phang et al., 2018; Dodge et al., 2020; D'Amour et al., 2020).",
"In this work, we explore the intersection of randomness and fairness in the context of clinical NLP.",
"Fairness is a particularly acute concern in clinical predictive tasks, given the potential of such models to influence treatment decisions.",
"This has motivated work investigating biases in predictive models trained over EHR (Zhang et al., 2020; Chen et al., 2018, 2019, 2020a; Pfohl et al., 2020; Chen et al., 2020b; Tripathi et al., 2020).",
"We investigate the impact of random seeds on the fairness of fine-tuned classifiers with respect to demographic characteristics such as gender and ethnicity.",
"There are many definitions of algorithmic fairness which formalize different desired properties (Mehrabi et al., 2019).",
"Following prior work, here we adopt a simple measure: The mean differences in model performance across demographic subgroups (Chen et al., 2019).",
"We find that, on the popular MIMIC-III dataset (Johnson et al., 2016), seeds with comparable validation performance can give rise to large variations in disparities across demographic subgroups (Figure 1).",
"We investigate the variability of overall model performance and fairness across random seeds for a set of clinical prediction tasks derived from the Mul-tiparameter Intelligence Monitoring in Intensive Care (MIMIC-III) set of Electronic Health Records (EHRs; Johnson et al. 2016).",
"For each task, we train a classifier on top of the contextualized representations of a BERT (Devlin et al., 2019) model pretrained over EHR data (Alsentzer et al., 2019).",
"Following recent work exploring randomness and fine-tuning, we consider the seeds used to shuffle the training data and to initialize the model parameters independently (Dodge et al., 2020).",
"Specifically, we generate K = 1000 pairs of shuffling and initialization seeds by sampling from a uniform distribution U (0 , 10000) .",
"For each seed pair, we measure the overall performance as well as the performance for each demographic subgroup in terms of the Area Under the ROC Curve (AUC).",
"MIMIC-III is a database of deidentified EHR comprising over 40k patients admitted to the intensive care unit of the Beth Israel Deaconess Medical Center between 2001 and 2012 (Johnson et al., 2016).",
"It comprises structured variables including vital sign measurements, lab test results, and medications.",
"It also contains clinical notes (e.g., doctor and nursing notes, radiology reports, and discharge summaries), which are the focus of our analysis.",
"MIMIC-III contains demographic information, including potentially sensitive attributes such as ethnicity/race, sex, spoken language, religion, and insurance status (which may be seen as a proxy for socioeconomic status (Chen et al., 2019)).",
"We are interested in the interaction between randomness and fairness in clinical predictions.",
"Following recent prior work (Zhang et al., 2020) on the latter, we focus our analyses on two benchmark tasks proposed by Harutyunyan et al. (2019): In-hospital Mortality (IHM) Predict risk of in-hospital mortality based on the first 48 hours of an ICU stay.",
"Phenotype Classification (PC) Classify which of 25 acute or chronic conditions (e.g., acute cerebrovascular disease, chronic kidney disease) are present in a given patient ICU stay record.",
"Similar to Zhang et al. (2020), we treat each condition as an independent binary classification task.",
"Table A.1 in the Appendix enumerates the full set of conditions and their respective prevalences.",
"We extracted training and test datasets for these tasks using the same pre-processing pipeline as Zhang et al. (2020).",
"1 We kept the same data splits and reserved 20% of the training data as validation set per task.",
"For each patient, we collected their clinical notes, as well as their gender and race/ethnicity (as recorded in the EHR).",
"The notes were filtered according to the categories Nurse , Physician and Nursing/Other to avoid notes of poor semantic quality, as suggested by Zhang et al. (2020).",
"Patients without relevant clinical notes were discarded, resulting in 11384 / 2591 and 22033 / 4919 training/test examples for the IHM and PC tasks, respectively.",
"It should be noted that these datasets are highly imbalanced both in terms of labels and demographic distribution with 55% Male , 85% White , 9% Black , 3% Asian and 3% Hispanic patients.",
"Table 1 shows the distribution of sample sizes across subgroups for each benchmark.",
"We define text classifiers for clinical tasks that map clinical notes corresponding to individual patients to binary labels.",
"We extract contextualized embed-dings from notes using a pretrained Transformer encoder and then map these to outputs (predictions) via a linear layer.",
"Transformers are feedforward networks and require fixed-length inputs.",
"To handle longer sequences, we adopt an approach used in prior works (Huang et al., 2019; Zhang et al., 2020).",
"Given an input sequence, we: (1) Extract N subsequences with sizes equal to that expected by the Transformer input layer; (2) Make individual 1 https://github.com/MLforHealth/ HurtfulWords predictions on the basis of each subsequence, and; (3) Then aggregate them into a final prediction.",
"More formally, an encoder operates over inputs of size E with H -dimensional hidden layers.",
"Given a patient's clinical notes X , we extract a set of N subsequences of length E , x = {{ w 11 , . . . , w 1 E } , . . . , { w N 1 , . . . , w NE }} X We construct a matrix Z RH N such that the n th column represents subsequence x n , Z [: , n ] = ( x n ) = (cid:80) j z x n j where z x n j RH is the embedding produced by the last hidden layer of the encoder for token j in the context of x n .",
"We then use a linear layer followed by a sigmoid activation to produce a prediction vector Y , encoding the class conditional probabilities for each subsequence.",
"This vector is then used to calculate the final probability as P ( Y = 1 | Y ) = Y max + Y mean N/c 1 + N/c , (1) where c is a scaling factor, which we set to c = 2 , following Huang et al. (2019).",
"We implement classifiers with PyTorch using the Transformer encoders from the huggingface 2 library (Wolf et al., 2019).",
"We initialize models to weights from ClinicalBERT (Huang et al., 2019), which was trained over scientific literature and clinical notes.",
"We train classifiers on the most recent N = 10 subsequences of E = 512 tokens from the notes associated with each patient.",
"We train using the ADAM optimizer (Kingma and Ba, 2014) for 500 epochs with early stopping.",
"We set the learning rate to = 0 .",
"01 , which we found to have the best validation performance on average across all tasks.",
"We compare the overall performance with the performance for each subgroup as a function of random seeds.",
"Figure 2 shows the overall performance (left) along with the gap between the best and worst observed subgroup AUCs (right), across tasks.",
"We observe a large variance in both the overall performance and the gap.",
"The former observation corroborates previous findings (Dodge et al., 2020).",
"To quantify how random seeds affect individual subgroups, we measure the the absolute differences ( s) between overall and subgroup performances.",
"We then evaluate whether there are correlations between overall performance and subgroup s.",
"Figures 3 and 4 present the results for the Shock phenotype classification task one of the tasks with largest disparities observed in prior work (Zhang et al., 2020).",
"Similar trends were found for the remaining tasks, and we report all results in the Appendix (Figures A.2-A.3 and A.4-A.6).",
"Figure 3 shows that the performance of all subgroups varies significantly across random seeds and that variances are higher for minority groups.",
"Larger variations in minority subgroups are to be expected, as any empirical estimate will have a variance that is inversely proportional to the sample size of a group.",
"In Figure 4, we observe that there seem to be two distinct clusters of seeds: One corresponding to high performing models (right of plots), and another to suboptimal models.",
"3 While the best performing models tend to have a lower variance of subgroup performance, there is otherwise no clear relationship between overall and subgroup performance.",
"Indeed, we find that many models with similar overall performance correspond to widely different subgroup s, particularly for the minority groups.",
"To explore the implications of this phenomenon, we simulate a grid search over all the random seeds on the validation set.",
"We select the best seed along with all other seeds with similar performance (i.e., within a difference of (cid:15) = 0 .",
"01 absolute AUC).",
"Figure 1 shows the test set subgroup performance s, for the best validation seeds, in the Shock phenotype classification task (see Figures A.7-A.8 for the other tasks).",
"Figure 5 summarizes the overall performance (left) along with the subgroup performance gap (right) across tasks.",
"We can see that selecting seeds on the basis of overall performance helps to reduce the subgroup performance gap (compare the right subplots in Figures 2 and 5).",
"However, the top performing models show disparities with respect to both gender and ethnicity, suggesting that these models maximize performance for some groups at the expense of others.",
"Moreover, we find multiple seeds with similar levels of validation performance that correspond to very different subgroup s .",
"Since we have not encoded any model selection preferences into the pipeline this variance may re-flect a form of underspecification .",
"Can we define 3 Dodge et al. (2020) also found that some seeds performed consistently well across all the evaluated tasks, while others always performed poorly.",
"criteria that explicitly accounts for subgroup performance?",
"We could then ask whether it is possible to maximize both fairness and overall performance with respect to random seeds.",
"We repeated the grid search experiments with simple criteria that incorporate some notion of subgroup performance, such as selecting the seeds that:",
"(a) maximize subgroup macro-average performance;",
"(b) minimize the average subgroup ; and",
"(c) maximize the overall performance minus the average subgroup .",
"To account for the effect the sample sizes on the apparent subgroup performance, we directly compare subgroup s for each random seed on the validation set and the test set.",
"We find that correlations between validation set and test set fairness are either non-existent or very weak in most tasks.",
"may produce models with similar validation performance but very different levels of apparent fair-ness' as a result of varying the random seed alone.",
"However, the fact that training-set and validation-set fairness are not reliable indicators of test-set fairness suggests that variance due to small subset sizes may be significant.",
"This is in some sense not surprising, given the combination of pronounced class imbalance and small subgroup samples in this data (see Tables 1 and A.1).",
"To confirm this, we repeat the experiments on a subset of the test data containing the same number of examples (equal to the smallest subgroup) for all groups, including majority groups.",
"Evaluating all subgroups using small samples yields similarly high variances in performance (Figure 6 and Appendix Figure A.1), which con-0 .",
"firms that the sample size is a significant factor in the variation of apparent model performance across random seeds.",
"These findings suggest that work investigating the fairness of fine-tuned classifiers should be careful to account for:",
"a) model variability due the choice of random seeds; and",
"b) variance in performance estimates due to small sample sizes.",
"See Appendix Section B for an illustrative example.",
"These observations are relevant for research using MIMIC-III, and for any corpora with similar properties, namely the combination of class imbalance and comparatively small subgroup sizes, which is likely to be present in EHR data where conditions are relatively rare and one is interested in fairness to minority groups (which are smaller by definition).",
"We have investigated the impact of random seeds on the fairness of fine-tuned pre-trained models for clinical tasks.",
"Specifically, we measured gaps in performance across gender and racial subgroups as a function of the choice of random seeds for data shuffling and parameter initialization.",
"In line with prior work, we found that classifiers trained on MIMIC-III data are often biased with respect to demographic subgroups.",
"The contribution of this work is the empirical confirmation that choice of random seed alone significantly affects the apparent bias: Seeds that yield comparable performance in aggregate on the validation data correspond to very different performances on subgroups in test data.",
"Our analyses corroborate Dodge et al. (2020)'s findings on the importance of carefully chosen random seeds, but also suggest that an equal amount of attention should be payed to the impact of these choices on model fairness.",
"However, interpretation of these results is complicated by sample size effects.",
"While MIMIC-III is in itself a large dataset, it also exhibits significant imbalance, both in terms of subgroups of patients and the prevalence of medical conditions.",
"These imbalances compound when considering subsets of patients in the context of specific prediction tasks, which often leads to small sample sizes for minority subgroups.",
"While we observed higher apparent variances for demographic minorities, our results also suggest that these variances can in large part be explained by the smaller sample sizes.",
"Indeed, we found the variances in subgroup performance to be inversely proportional to the size of the subgroup.",
"Fairness has rightly been an issue of increasing concern within the NLP community.",
"This issue is particularly important in clinical NLP, given the potential that such models may ultimately have on patient health.",
"We have investigated the degree to which different subgroup performances may be observed even fixing the (aggregate) validation data performance; we find wide variances across subgroups.",
"That said, this work also highlights inherent limitations of using MIMIC-III (the standard dataset for clinical NLP) to evaluate the fairness of models, given the relatively small samples of patients that belong to demographic groups of interest.",
"We hope these contributions encourage continued research into fairness in the context of clinical NLP.",
"We would like to thank Darius Irani for his contribution in replicating the experiments from (Zhang et al., 2020).",
"This material is based upon work supported in part by the National Science Foundation under Grant No. 1901117."
] | [
"abstain",
"objective",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"State-of-the-art NLP systems represent inputs with word embeddings, but these are brittle when faced with Out-of-Vocabulary (OOV) words.",
"To address this issue, we follow the principle of mimick-like models to generate vectors for unseen words, by learning the behavior of pre-trained embeddings using only the surface form of words.",
"We present a simple contrastive learning framework, LOVE, which extends the word representation of an existing pre-trained language model (such as BERT), and makes it robust to OOV with few additional parameters.",
"Extensive evaluations demonstrate that our lightweight model achieves similar or even better performances than prior competitors, both on original datasets and on corrupted variants.",
"Moreover, it can be used in a plug-and-play fashion with FastText and BERT, where it significantly improves their robustness.",
"Word embeddings represent words as vectors (Mikolov et al., 2013a,b; Pennington et al., 2014).",
"They have been instrumental in neural network approaches that brought impressive performance gains to many natural language processing (NLP) tasks.",
"These approaches use a fixed-size vocabulary.",
"Thus they can deal only with words that have been seen during training.",
"While this works well on many benchmark datasets, real-word corpora are typically much noisier and contain Out-of-Vocabulary (OOV) words, i.e., rare words, domain-specific words, slang words, and words with typos, which have not been seen during training.",
"Model performance deteriorates a lot with unseen words, and minor character perturbations can flip the prediction of a model (Liang et al., 2018; Belinkov and Bisk, 2018; Sun et al., 2020; Jin et al., 2020).",
"Simple experiments (Figure",
"1) show that the addition of typos to datasets degrades the performance for text classification models considerably.",
"To alleviate this problem, pioneering work pretrained word embeddings with morphological features (sub-word tokens) on large-scale datasets (Wi-eting et al., 2016; Bojanowski et al., 2017; Heinzerling and Strube, 2017; Zhang et al., 2019).",
"One of the most prominent works in this direction is FastText (Bojanowski et al., 2017), which incorporates character n-grams into the skip-gram model.",
"With FastText, vectors of unseen words can be imputed by summing up the n-gram vectors.",
"However, these subword-level models come with great costs: the requirements of pre-training from scratch and high memory footprint.",
"Hence, several simpler approaches have been developed, e.g., MIMICK (Pinter et al., 2017), BoS (Zhao et al., 2018) and KVQ-FH (Sasaki et al., 2019).",
"These use only the surface form of words to generate vectors for unseen words, through learning from pre-trained embeddings.",
"tions and alleviate the OOV problem, two main challenges remain.",
"First, the models remain bound in the trade-off between complexity and performance: The original MIMICK is lightweight but does not produce high-quality word vectors consistently.",
"BoS and KVQ-FH obtain better word representations but need more parameters.",
"Second, these models cannot be used with existing pre-trained language models such as BERT.",
"It is these models, however, to which we owe so much progress in the domain (Peters et al., 2018; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2020).",
"To date, these high-performant models are still fragile when dealing with rare words (Schick and Schtze, 2020), misspellings (Sun et al., 2020) and domain-specific words (El Boukkouri et al., 2020).",
"We address these two challenges head-on: we design a new contrastive learning framework to learn the behavior of pre-trained embeddings, dubbed LOVE, L earning O ut-ofV ocabulary E mbeddings.",
"Our model builds upon a memory-saving mixed input of character and subwords instead of n-gram characters.",
"It encodes this input by a lightweight Positional Attention Module.",
"During training, LOVE uses novel types of data augmentation and hard negative generation.",
"The model is then able to produce high-quality word representations that are robust to character perturbations, while consuming only a fraction of the cost of existing models.",
"For instance, LOVE with 6.5M parameters can obtain similar representations as the original FastText model with more than 900M parameters.",
"What is more, our model can be used in a plug-and-play fashion to robustify existing language models.",
"We find that using LOVE to produce vectors for unseen words improves the performance of FastText and BERT by around 1.4-6.8 percentage points on noisy text without hampering their original capabilities (As shown in Figure 2).",
"In the following, Section 2 discusses related work, Section 3 introduces preliminaries, Section 4 presents our approach, Section 5 shows our experiments, and Section 6 concludes.",
"The appendix contains additional experiments and analyses.",
"Our code and data is available at https: //github.com/tigerchen52/LOVE 2 Related Work 2.1 Character-level Embeddings To address OOV problems, some approaches inject character-level features into word embeddings during the pre-training (Wieting et al., 2016; Cao and Rei, 2016; Bojanowski et al., 2017; Heinzerling and Strube, 2017; Kim et al., 2018; Li et al., 2018; stn et al., 2018; Piktus et al., 2019; Zhu et al., 2019; Zhang et al., 2019; Hu et al., 2019).",
"One drawback of these methods is that they need to pre-train on a large-scale corpus from scratch.",
"Therefore, simpler models have been developed, which directly mimic the well-trained word embeddings to impute vectors for OOV words.",
"Some of these methods use only the surface form of words to generate embeddings for unseen words (Pinter et al., 2017; Zhao et al., 2018; Sasaki et al., 2019; Fukuda et al., 2020; Jinman et al., 2020), while others use both surface and contextual information to create OOV vectors (Schick and Schtze, 2019a,b).",
"In both cases, the models need an excessive number of parameters.",
"FastText, e.g., uses ~2 million n-gram characters to impute vectors for unseen words.",
"Currently, the state-of-the-art word representations are pre-trained language models, such as ELMo (Peters et al., 2018), BERT (Devlin et al., 2019) and XLnet (Yang et al., 2019), which adopt subwords to avoid OOV problems.",
"However, BERT is brittle when faced with rare words (Schick and Schtze, 2020) and misspellings (Sun et al., 2020).",
"To make BERT more robust, Charac-terBERT (El Boukkouri et al., 2020) and Char-BERT (Ma et al., 2020) infuse character-level features into BERT and pre-train the variant from 3489 scratch.",
"This method can significantly improve the performance and robustness of BERT, but requires pre-training an adapted transformer on a large amount of data.",
"Another work on combating spelling mistakes recommends placing a word corrector before downstream models (Pruthi et al., 2019), which is effective and reusable.",
"The main weakness of this method is that an error generated by the word corrector propagates to downstream tasks.",
"For example, converting aleph to alpha may break the meaning of a mathematical statement.",
"And indeed, using the word corrector consistently leads to a drop (0.5-2.0 percentage points) in BERT's performance on the SST dataset (Socher et al., 2013).",
"The origin of contrastive learning can be traced back to the work by Becker and Hinton (1992) and Bromley et al. (1993).",
"This method has achieved outstanding success in self-supervised representation learning for images (Oord et al., 2018; Hjelm et al., 2018; He et al., 2020; Chen et al., 2020; Grill et al., 2020).",
"The contrastive learning framework learns representations from unlabeled data by pulling positive pairs together and pushing negative pairs apart.",
"For training, the positive pairs are often obtained by taking two randomly augmented versions of the same sample and treating the other augmented examples within a mini-batch as negative examples (Chen et al., 2017, 2020).",
"The most widely used loss is the infoNCE loss (or contrastive loss) (Hjelm et al., 2018; Logeswaran and Lee, 2018; Chen et al., 2020; He et al., 2020).",
"Although many approaches adopt contrastive learning to represent sentences (Giorgi et al., 2020; Wu et al., 2020; Gao et al., 2021), it has so far not been applied to word representations.",
"Given pre-trained word embeddings, and given an OOV word, the core idea of MIMICK (Pinter et al., 2017) is to impute an embedding for the OOV word using the surface form of the word, so as to mimic the behavior of the known embeddings.",
"BoS (Zhao et al., 2018), KVQ-FH (Sasaki et al., 2019), Robust Backed-off Estimation (Fukuda et al., 2020), and PBoS (Jinman et al., 2020) work similarly, and we refer to them as mimick-like models.",
"Formally, we have a fixed-size vocabulary set V , with an embedding matrix W R |V| m , in which each row is a word vector u w R m for the word w .",
"A mimick-like model aims to impute a vector v w for an arbitrary word w (cid:54) V .",
"The training objective of mimick-like models is to minimize the expected distance between u w and v w pairs: L dis = 1 |V| (cid:88) w V ( u w , v w ) (1) Here, ( ) is a distance function, e.g., the Euclidean distance = (cid:107) u w v w (cid:107) 22 or the cosine distance = 1 cos( u w , v w ) .",
"The vector v w is generated by the following equation: v w = ( ( w )) , for w V or w / V (2) Here, ( ) is a function that maps w to a list of subunits based on the surface form of the word (e.g., a character or subword sequence).",
"After that, the sequence is fed into the function ( ) to produce vectors, and the inside structure can be CNNs, RNNs, or a simple summation function.",
"After training, the model can impute vectors for arbitrary words.",
"Table 1 shows some features of three mimick-like models.",
"Contrastive learning methods have achieved significant success for image representation (Oord et al., 2018; Chen et al., 2020).",
"The core idea of these methods is to encourage learned representations for positive pairs to be close, while pushing representations from sampled negative pairs apart.",
"The widely used contrastive loss (Hjelm et al., 2018; Logeswaran and Lee, 2018; Chen et al., 2020; He et al., 2020) is defined as: (cid:96) cl = log e sim ( u T i u + ) / e sim ( u T i u + ) / + (cid:80) e sim ( u T i u ) / (3) 3490 Here, is a temperature parameter, sim ( ) is a similarity function such as cosine similarity, and ( u i , u + ) , ( u i , u ) are positive pairs and negative pairs, respectively (assuming that all vectors are normalized).",
"During training, positive pairs are usually obtained by augmentation for the same sample, and negative examples are the other samples in the mini-batch.",
"This process learns representations that are invariant against noisy factors to some extent.",
"LOVELOVE (Learning Out-of-Vocabulary Embeddings) draws on the principles of contrastive learning to maximize the similarity between target and generated vectors, and to push apart negative pairs.",
"An overview of our framework is shown in Figure 3.",
"It is inspired by work in visual representation learning (Chen et al., 2020), but differs in that one of the positive pairs is obtained from pre-trained embeddings instead of using two augmented versions.",
"We adopt five novel types of word-level augmentations and a lightweight Positional Attention Module in this framework.",
"Moreover, we find that adding hard negatives during training can effectively yield better representations.",
"We removed the nonlinear projection head after the encoder layer, because its improvements are specific to the representation quality in the visual field.",
"Furthermore, our approach is not an unsupervised contrastive learning framework, but a supervised learning approach.",
"Our framework takes a word from the original vocabulary and uses data augmentation to produce a corruption of it.",
"For example, \"misspelling\" becomes \"mispelling\" after dropping one letter \"s\" .",
"Next, we obtain a target vector from the pre-trained embeddings for the original word, and we generate a vector for the corrupted word.",
"These two vectors are a pair of positive samples, and we maximize the similarity between them while making the distance of negative pairs (other samples in the same mini-batch) as large as possible.",
"As mentioned before, we use the contrastive loss as an objective function (Eq 3).",
"There are five key ingredients in the framework that we will detail in the following (similar to the ones in Table 1): the Input Method, the Encoder, the Loss Function, our Data Augmentation, and the choice of Hard Negatives.",
"Our goal is to use the surface form to impute vectors for words.",
"The question is thus how to design the function ( ) mentioned in Section 3.1 to represent each input word.",
"MIMICK (Pinter et al., 2017) straightforwardly uses the character sequence (see Table 1).",
"This, however, loses the information of morphemes, i.e., sequences of characters that together contribute a meaning.",
"Hence, FastText (Bojanowski et al., 2017) adopts character n-grams.",
"Such n-grams, however, are highly redundant.",
"For example, if we use substrings of length 3 to 5 to represent the word misspelling , we obtain a list with 24 n-gram characters while only the substrings {mis, spell, ing} are the three crucial units to understand the word.",
"Hence, like BERT, we use WordPiece (Wu et al., 2016) with a vocabulary size of around 30000 to obtain meaningful subwords of the input word.",
"For the word misspelling , this yields { miss , ##pel , ##ling }.",
"However, if we just swap two letters (as by a typo), then the sequence becomes completely different: { mi , ##sp , ##sell , ##ing }.",
"Therefore, we use both the character sequence and subwords (Figure A1).",
"We shrink our vocabulary by stemming all words and keeping only the base form of each word, and by removing words with numerals.",
"This decreases the size of vocabulary from 30 000 to 21 257 without degrading performance too much (Section A.1).",
"Let us now design the function ( ) mentioned in Section 3.1.",
"We are looking for a function that can encode both local features and global features.",
"Local features are character n-grams, which provide robustness against minor variations such as character swaps or omissions.",
"Global features combine local features regardless of their distance.",
"For the word misspelling , a pattern of pre-fix and suffix mis + ing can be obtained by combining the local information at the beginning and the end of the word.",
"Conventional CNNs, RNNs, and self-attention cannot extract such local and global information at the same time.",
"Therefore, we design a new Positional Attention Module .",
"Suppose we have an aforementioned mixed input sequence and a corresponding embedding matrix V R |V| d where d is the dimension of vectors.",
"Then the input can be represented by a list of vectors: X = { x 1 , x 2 , ..., x n } R n d where n is the 3491 mispelling Data Augmentation Encoder Pre-trained Embeddings MaximizeSimilarity misspelling Figure 3: The framework of LOVE with an example of the word misspelling .",
"length of the input.",
"To extract local information, we first adopt positional attention to obtain n-gram features, and then feed them into a conventional self-attention layer to combine them in a global way.",
"This can be written as: X = SA(PA( X )) WO (4) Here, SA is a standard multi-head self-attention and PA is a positional attention.",
"WO R d V d O is a trainable parameter matrix, where d V are the dimensions of values in SA and PA, and d O is that of X .",
"As for the Positional Attention, we adopt absolute sinusoidal embeddings (Vaswani et al., 2017) to compute positional correlations: PA( X ) = Softmax (cid:18) PPT d (cid:19) ( X WV ) (5) Here, P R n d are the position embeddings, and WV R d d V are the corresponding parameters.",
"More details about the encoder are in Appendix C.4.",
"In this section, we focus on the loss function L ( ) .",
"Mimick-like models often adopt the mean squared error (MSE), which tries to give words with the same surface forms similar embeddings.",
"However, the MSE only pulls positive word pairs closer, and does not push negative word pairs apart.",
"Therefore, we use the contrastive loss instead (Equation 3).",
"Wang and Isola (2020) found that the contrastive loss optimizes two key properties: Alignment and Uniformity .",
"The Alignment describes the expected distance (closeness) between positive pairs: (cid:96) align (cid:44) E ( x,y ) p pos ( u x , u y ) (6) Here, p pos is the distribution of positive pairs.",
"Here, p data is the data distribution and t > 0 is a parameter.",
"The two properties are consistent with our expected word representations: positive word pairs should be kept close and negative word pairs should be far from each other, finally scattered over the hypersphere.",
"Our positive word pairs are generated by data augmentation, which can increase the amount of training samples by using existing data.",
"We use various strategies (Figure",
"4) to increase the diversity of our training samples: (1) Swap two adjacent characters, (2) Drop a character, (3) Insert a new character, (4) Replace a character according to keyboard distance, (5) Replace the original word by a synonymous word.",
"The first four augmentations are originally designed to protect against adversarial attacks (Pruthi et al., 2019).",
"We add the synonym replacement strategy to keep semantically similar words close in the embedding space something that cannot be achieved by the surface form alone.",
"Specifically, a set of synonyms is obtained by retrieving the nearest neighbors from pre-trained embeddings like FastText.",
"Negative word pairs are usually chosen randomly from the mini-batch.",
"However, we train our model to be specifically resilient to hard negatives (or difficult negatives) , i.e., words with similar surface forms but different meanings (e.g., misspelling and dispelling ).",
"To this end, we add a certain number of hard negative samples (currently 3 of them) to the mini-batch, by selecting word pairs that are not synonyms and have a small edit distance.",
"Pre-trained Language Models (e.g., ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019)) dynamically generate word representations based on specific contexts, which cannot be mimicked directly.",
"To this end, we have two options: We can either learn the behavior of the input embeddings in BERT before the multi-layer attentions or mimic the static distilled embeddings (Bommasani et al., 2020; Gupta and Jaggi, 2021).",
"We use the BERT as an example to explain these two methods.",
"Suppose we have a subword sequence after applying WordPiece to a sentence: W = { w 1 , w 2 , ..., w n } .",
"For the subword sequence W , BERT first represents it as a list of subword embeddings: E in = { e sub 1 , e sub 2 , ..., e subn } .",
"We refer to this static representation as the Input Embedding of BERT, and we can use our model to mimic the behavior of this part.",
"We call this method mimicking input embeddings .",
"For ease of implementation, we learn only from the words that are not separated into pieces.",
"After that step, BERT applies a multilayer multi-head attention to the input embeddings E in , which yields a contextual representation for each subword: E out = { e out 1 , e out 2 , ..., e outn } .",
"However, these contextual representations vary according to the input sentence and we cannot learn from them directly.",
"Instead, we choose to mimic the distilled static embeddings from BERT, which are obtained by pooling (max or average) the contextual embeddings of the word in different sentences.",
"We call this method mimicking distilled embeddings .",
"The latter allows for better word representations, while the former does not require training on a large-scale corpus.",
"Our empirical studies show that mimicking distilled embeddings performs only marginally better.",
"Therefore, we decided to rather learn the input embeddings of BERT, which is simple yet effective 4.6 Plug and Play One of the key advantages of our model is that it can be used as a plug-in for other models.",
"For models with static word embeddings like FastText, one can simply use our model to generate vectors for unseen words.",
"For models with dynamic word embeddings like BERT, if a single word is tokenized into several parts, e.g. misspelling = { miss , ##pel , ##ling }, we regard it as an OOV word.",
"Then, we replace the embeddings of the subwords by a single embedding produced by our model before the attention layer.",
"Our final enhanced BERT model has 768 dimensions and 16M parameters.",
"Note that the BERT-base model has ~110M parameters and its distilled one has ~550M parameters.",
"There are two main methods to evaluate word representations: Intrinsic and Extrinsic.",
"Intrinsic evaluations measure syntactic or semantic relationships between words directly, e.g., word similarity in word clusters.",
"Extrinsic evaluations measure the performance of word embeddings as input features to a downstream task, e.g., named entity recognition (NER) and text classification.",
"Several studies have shown that there is no consistent correlation between intrinsic and extrinsic evaluation results (Chiu et al., 2016; Faruqui et al., 2016; Wang et al., 2019).",
"Hence, we evaluate our representation by both intrinsic and extrinsic metrics.",
"Specifically, we use 8 intrinsic datasets (6 word similarity and 2 word cluster tasks): RareWord (Luong et al., 2013), SimLex (Hill et al., 2015), MTurk (Halawi et al., 2012), MEN (Bruni et al., 2014), WordSim (Agirre 3493 parameters Word Similarity Word Cluster Avg embedding others RareWord SimLex MTurk MEN WordSim SimVerb AP BLESS FastText (2017) 969M -48.1 30.4 66.9 78.1 68.2 25.7 58.0 71.5 55.9 MIMICK (2017) 9M 517K 27.1 15.9 32.5 36.5 15.0 7.5 59.3 72.0 33.2 BoS (2018) 500M -44.2 27.4 55.8 65.5 53.8 22.1 41.8 39.0 43.7 KVQ-FH (2019) 12M -42.4 20.4 55.2 63.4 53.1 16.4 39.1 42.5 41.6 LOVE 6.3M 200K 42.2 35.0 62.0 68.8 55.1 29.4 53.2 51.5 49.7 Table 2: Performance on the intrinsic tasks, measured as Spearman's and purity for word similarity and clustering.",
"et al., 2009), Simverb (Agirre et al., 2009), AP (Al-muhareb, 2006) and BLESS (Baroni and Lenci, 2011).",
"We use four extrinsic datasets (2 text classification and 2 NER tasks): SST2 (Socher et al., 2013), MR (Pang and Lee, 2005), CoNLL-03 (Sang and De Meulder, 2003) and BC2GM (Smith et al., 2008).",
"It is worth noting that the RareWord dataset contains many long-tail words and the BC2GM is a domain-specific NER dataset.",
"All data augmentations and typo simulations are implemented by NLPAUG 1 .",
"Appendix B provides more details on our datasets and experimental settings.",
"Table 2 shows the experimental results on 8 intrinsic tasks.",
"Compared to other mimick-like models, our model achieves the best average score across 8 datasets while using the least number of parameters.",
"Specifically, our model performs best on 5 word similarity tasks, and second-best on the word cluster tasks.",
"Although there is a gap between our model and the original FastText, we find our performance acceptable, given that our model is 100x times smaller.",
"Table 3 shows the results on four downstream datasets and their corrupted version.",
"In this experiment, we introduce another non-trivial baseline: Edit Distance.",
"For each corrupted word, we find 1 https://github.com/makcedward/nlpaug the most similar word from a vocabulary using edit distance and then use the pre-trained vectors of the retrieved word.",
"Considering the time cost, we only use the first 20K words appearing in FastText (2M words) as reference vocabulary.",
"The typo words are generated by simulating post-OCR errors.",
"For the original datasets, our model obtains the best results across 2 datasets and the second-best on NER datasets compared to other mimick-like models.",
"For the corrupted datasets, the performance of the FastText model decreases a lot and our model is the second best but has very close scores with BoS consistently.",
"Compared to other mimick-like models, our model with 6.5M achieves the best average score.",
"Although Edit Distance can effectively restore the original meaning of word, it is 400x times more time-consuming than our model.",
"In this experiment, we evaluate the robustness of our model by gradually adding simulated post-OCR typos (Ma, 2019).",
"Table 4 shows the performances on SST2 and CoNLL-03 datasets.",
"We observe that our model can improve the robustness of the original embeddings without degrading their performance.",
"Moreover, we find our model can make FastText more robust compared to other commonly used methods against unseen words: a generic UNK token or character-level representation of neural networks.",
"Figure 5 shows the robust-3494 SST2 CoNLL-03 Typo Probability original 10% 30% 50% 70% 90% original 10% 30% 50% 70% 90% Avg Static Embeddings FastText 82.3 68.2 59.8 56.7 57.8 60.3 86.4 81.6 78.9 73.9 70.2 63.4 70.0 FastText + LOVE 82.1 79.8 74.9 74.2 68.8 67.2 86.3 84.7 81.8 77.5 73.1 71.3 76.8 Dynamical Embeddings BERT 91.5 88.2 78.9 74.7 69.0 60.1 91.2 89.8 86.2 83.4 79.9 76.5 80.7 BERT + LOVE 91.5 88.3 83.7 77.4 72.7 63.3 89.9 88.3 86.1 84.3 80.8 78.3 82.1 Table 4: Robust evaluation (five runs of different learning rates) on text classification and NER under simulated post-OCR typos.",
"ness check of different strategies. FastText+LOVE has a consistent improvement on both SST2 and CoNLL-03 datasets. At the same time, LOVE degrades the performance on the original datasets only marginally if at all.",
"We now vary the components in our architecture (input method, encoder and loss function) to demonstrate the effectiveness of our architecture.",
"Input Method. To validate the effect of our Mixed Input strategy, we compare it with two other methods: using only the character sequence or only the subword sequence. Table 5 shows that the Mixed method achieves better representations, and any removal of char or subword information can decrease the performance.",
"Encoder. To encode the input sequence, we developed the Positional Attention Module (PAM), which first extracts ngram-like local features and then uses self-attention combine them without distance restrictions. Table 5 shows that PAM performs the best, which validates our strategy of incorporating both local and global parts inside a word. At the same time, the number of parameters",
"of PAM is acceptable in comparison. We visualize the attention weights of PAM in Appendix C.4, to show how the encoder extracts local and global morphological features of a word.",
"Loss Function. LOVE uses the contrastive loss, which increases alignment and uniformity. Wang and Isola (2020) proves that optimizing directly these two metrics leads to comparable or better performance than the original contrastive loss. Such a loss function can be written as:",
"Here, is a hyperparameter that controls the im-pact of (cid:96) uniform . We set this value to 1.0 because it achieves the best average score on RareWord and SST2. An alternative is to use the Mean Squared Error (MSE), as in mimick-like models. Table 5 compares the performances of these different loss functions. The contrastive loss significantly outperforms the MSE, and there is no obvious improve-3495",
"ment by directly using alignment and uniformity. We also tried various temperatures for the contrastive loss, and the results are shown in Table A3 in the appendix. In the end, a value of = 0 . 07 provides a good performance.",
"Data Augmentation and Hard Negatives. In Table 5, we observe that the removal of our hard negatives decreases the performance, which demonstrates the importance of semantically different words with similar surface forms.",
"LOVE uses five types of word augmentation. We find that removing this augmentation does not deteriorate performance too much on the word similarity task, while it causes a 0.4 point drop in the text classification task (the last row in Table 5), where data augmentations prove helpful in dealing with misspellings. We further analyze the performance of single and composite augmentations on RareWord and SST2 in the appendix in Figure A3 and Figure A4. We find that a combination of all five types yields the best results.",
"As described in Section 4.5, we can mimic the input or distilled embeddings of BERT. After learning from BERT, we use the vectors generated by LOVE to replace the embeddings of OOV subwords. Finally, these new representations are fed into the multi-layer attentions. We call this method the Replacement strategy. To valid its effectiveness, we compare it with two other baselines: (1) Linear",
"Combination (Fukuda et al., 2020). For each subword e sub , the generated vectors of word e word containing the subwords are added to the subword",
"vectors of BERT:",
"e new = (1 ) e sub + e word = sigmoid ( W e sub )",
"where e sub R d is a subword vector of BERT, and e word R d is a generated vector of our model. W R d are trainable parameters. (2) Add . A generated word vector is directly added to a corresponding subword vector of BERT:",
"Table 6 shows the result of these strategies. All of them can bring a certain degree of robustness to BERT without decreasing the original capability, which demonstrates the effectiveness of our framework. Second, the replacement strategy consistently performs best. We conjecture that BERT cannot restore a reasonable meaning for those rare and misspelling words that are tokenized into subwords, and our generated vectors can be located nearby the original word in the space. Third, we find mimicking distilled embeddings performs the best while mimicking input embeddings comes close. Considering that the first method needs training on large-scale data, mimicking the input embeddings is our method of choice.",
"We have presented a lightweight contrastive-learning framework, LOVE, to learn word representations that are robust even in the face of out-of-vocabulary words. Through a series of empirical studies, we have shown that our model (with only 6.5M parameters) can achieve similar or even better word embeddings on both intrinsic and extrinsic evaluations compared to other mimick-like models. Moreover, our model can be added to models with static embeddings (such as FastText) or dynamical embeddings (such as BERT) in a plug-and-play fashion, and bring significant improvements there. For future work, we aim to extend our model to languages other than English.",
"We sincerely thank all the reviewers for their insightful comments and helpful suggestions. This work was partially funded by ANR-20-CHIA-0012-01 (NoRDF)."
] | [
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Evaluating the quality of responses generated by open-domain conversation systems is a challenging task.",
"This is partly because there can be multiple appropriate responses to a given dialogue history.",
"Reference-based metrics that rely on comparisons to a set of known correct responses often fail to account for this variety, and consequently correlate poorly with human judgment.",
"To address this problem, researchers have investigated the possibility of assessing response quality without using a set of known correct responses.",
"Tao et al. (2018) demonstrated that an automatic response evaluation model could be made using unsupervised learning for the next-utterance prediction (NUP) task.",
"For unsupervised learning of such a model, we propose a method of manipulating a golden response to create a new negative response that is designed to be inappropriate within the context while maintaining high similarity with the original golden response.",
"We find, from our experiments on English datasets, that using the negative samples generated by our method alongside random negative samples can increase the model's correlation with human evaluations.",
"The process of generating such negative samples is automated and does not rely on human annotation.",
"1 1 Introduction Automatic evaluation of responses can be difficult because multiple answers could be suitable for a single context.",
"Well-known metrics often used in machine translation or text summarization, such as BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), or ROUGE (Lin, 2004), are based on measuring n-gram overlap with a set of human-annotated golden answers.",
"Compared to machine Corresponding author 1 The code is available at https://github.com/ nlpcl-lab/dialog-eval-hard-negative .",
"This could explain the low correlation between n-gram-based evaluations and human-conducted evaluations for responses generated by conversation systems, as reported by Liu et al. (2016).",
"They also suggested calculating the embedding similarities between responses and correct answers, and showed that these metrics had a higher correlation with human evaluations than n-gram-based metrics.",
"As this method only rewards responses similar to ones in the fixed set of answer candidates, however, it still fails to account for other possible answers that are dissimilar to the known answers.",
"To solve this problem, Lowe et al. (2017) proposed a supervised regression model that makes predictions independent of correct answer candidates.",
"Although they were able to achieve better correlation with human evaluations, their method depends on procuring a human-annotated dataset to learn from.",
"Tao et al. (2018) used the Next-Utterance Prediction (NUP) task to learn for automatic response evaluation.",
"Their model, which is unsupervised, learned to distinguish an appropriate response from random negative samples (responses randomly taken from the training corpus).",
"The model can evaluate the response quality by estimating the probability that the response occurs directly after the dialogue history.",
"They also demonstrated that the probability-based evaluations highly correlated with human evaluations of response quality.",
"In this paper, we propose a method to create a negative sample by manipulating a golden response.",
"The manipulation is carried out in three steps: (1) scoring each word, (2) selecting words to replace, and (3) replacing the selected words.",
"In the first step, each word is assigned a score designed to determine how dependent the word is on the context.",
"In the second step, we select all the words with a score above a threshold value, where higher scores indicate higher dependency to the dialogue history.",
"In the third step, all previously selected words are masked and replaced with words predicted in their place by a pretrained language model (LM).",
"Figure 1 shows an example of a negative sample generated by our method.",
"When \"What's wrong with heading out with Mark for vacation?\" is the golden response, the tokens \"with\" , \"heading\" , \"vacation\" , and \"?\" were selected and replaced with \"?\" , \"Go\" , \"dinner\" , and \".\" , in that order.",
"We find that the model trained with our negative samples alongside random negative samples shows a higher correlation with human evaluations than the models trained only on random negative samples, in experiments using two datasets (Zhao et al., 2020).",
"We also find evidence that automatic evaluation systems trained with the negative samples generated by our proposed method can make decisions closer to human judgment than those without.",
"The contributions of this paper are as follows: (1) We introduce a method that automatically generates negative samples from the golden responses.",
"(2) We show that the negative samples can boost unsupervised learning of an automatic response evaluation model with experiment results.",
"(3) We conducted crowdsourcing and used its results to examine whether the negative samples generated by our method are actually negative.",
"n-gram overlap based metrics such as BLEU, METEOR, and ROUGE show low correlation with human evaluations when used to evaluate the results of an open-domain conversation system.",
"They suggested measuring the similarity by comparing embeddings of a generated response to those of the golden response.",
"Li et al. (2016) explored dialog system with textual feedback.",
"Ghandeharioun et al. (2019) suggested the necessity of interactive human evaluation for dialogue systems, and proposed a self-play scenario to reduce the burden of human effort.",
"Hashimoto et al. (2019) proposed a method to combine human assessments with the predictions of an evaluation model.",
"Lowe et al. (2017) proposed a supervised learning method to predict the quality of a response directly, rather than measuring the similarities with golden responses.",
"Tao et al. (2018) showed that a model trained on the NUP task, in an unsupervised manner, can be used to predict the quality of a response that is generated by a system.",
"Ghazar-ian et al. (2019) improved the previous work by using contextualized word embeddings.",
"Mehri and Eskenazi (2020) proposed two unsupervised evaluation models: one based on masked language modeling (MLM) and another based on the response retrieval task using a pretrained LM.",
"Pang et al. (2020) predicted the coherence and fluency of a response by estimating its likelihood using a LM.",
"Sai et al. (2020) emphasized the importance of adversarial negative samples for learning response evaluation, and released a dataset with human-curated adversarial negative responses.",
"Their negative samples were manually curated, however, whose process can be both time-consuming and expensive.",
"Wu et al. (2020) attempted to improve the performance of evaluation models for abstractive summarization by corrupting the golden summary and using it as a negative sample.",
"In the machine translation task, Sellam et al. (2020) created paired data with synthetic examples, through methods such as back-translation and mask-filling with BERT (Devlin et al., 2019), and they used the paired data to pretrain the evaluation models.",
"Our work introduces a method to create negative samples by manipulating the golden response to the dialogue history, and also suggests that the negative samples generated by the proposed method could be used to improve the unsupervised response evaluation model.",
"The proposed method can be performed automatically without human effort.",
"In this section, we describe our method to generate negative samples.",
"The proposed method creates a negative sample by selecting and replacing specific word(s) in a golden response.",
"The word selection is based on the difference between",
"(a) the estimated probability that a word would appear in the response considering the dialogue history and",
"(b) the estimated probability that the word would appear in the response when the dialogue history is not considered.",
"An LM that can perform MLM can be used to estimate these probabilities.",
"Words that have large differences in probability are selected and replaced with other words.",
"When replacing a word with another word, an LM that can perform MLM can be used to predict the word that is most likely to appear in the position of the original word when the dialogue history is not given.",
"The proposed method includes a scoring process to determine which words in the golden response are affected the most by the dialogue history.",
"The score of a word is calculated by taking the difference between",
"(a) the estimated probability of the word appearing in its position when the dialogue history is given and",
"(b) the estimated probability of the word appearing in its position when the dialogue history is not given.",
"This scoring process is performed independently for all words in the target response.",
"Specifically, to calculate the score of the i -th word ( x i ) in the golden response, we first replace x i with the [mask] token.",
"Then the likelihood that the original word x i appears in place of the masked token is calculated twice: once with the dialogue history and once without.",
"The difference in the log-likelihood is used as the final score of each word, which is defined as score( x i | c, r / i ; ) = log(P( x i | [ c ; r / i ]; )) log(P( x i | r / i ; )) (1) , where x i denotes the word to be scored, and r / i denotes the sequence of words in the golden response where x i is masked.",
"c denotes the dialogue history of the golden response, and [; ] the concatenation of two pieces of text.",
"P ( x i | [ c ; r / i ]; ) denotes the estimated probability that x i would occur when the dialog history is considered.",
"P ( x i | r / i ; ) denotes the estimated probability that x i would occur when Figure 2: An illustration of our proposed method for generating a negative sample.",
"Figure 2 shows an example of our proposed scoring process.",
"The word \"vacation\" in the original response received the highest score among the words in the response.",
"The words \"with\" and \"heading\" also scored higher than other words.",
"For each sentence, we select words that scored higher than the threshold t.",
"For example, in the case seen in Figure 2, if the threshold is 0.5, the words \"with\", \"heading\", \"vacation\" , and \"?\" will be selected.",
"If none of the words receive a score higher than the threshold value, no words will be selected, and in this case, a negative sample cannot be generated.",
"We set the threshold t to 0.5 for our experiments.",
"Using this threshold in our dataset, an average of 27.28% of tokens were selected for each response.",
"Also, 94.89% of the responses contained at least one selected word, which means a negative sample could be generated for 94.89% of the cases.",
"The selected words are then replaced using an LM.",
"All selected words are replaced with [mask] tokens in the original response.",
"Then the LM predicts, without considering the dialogue history, the words that are most likely to occur in the location of each masked word.",
"If the LM predicts the original word, the second most likely word is used instead.",
"4.1.1 Dataset To measure the correlation between model predictions and human evaluations, we use the response-evaluation dataset proposed by Zhao et al. (2020).",
"The dataset contains dialogue histories, machine-generated responses, golden responses, and appropriateness scores evaluated by human annotators.",
"The scores were on a 5-point Likert scale, and each response was scored by four annotators.",
"Six generative models, S2S (Sutskever et al., 2014), attentional S2S, HRED (Serban et al., 2016), VHRED (Serban et al., 2017), GPT2-sm and GPT2-md (Wolf et al., 2018), with three decoding algorithms, greedy decoding, ancestral decoding, and nucleus sampling (Holtzman et al., 2020), were used to generate the responses.",
"They used DailyDialog (Li et al., 2017) and PersonaChat (Zhang et al., 2018).",
"For each dataset, they trained a set of generative conversation models.",
"Each of the 900 context-response pairs was randomly selected from the test set of the two datasets, and the annotators evaluated the appropriateness of each response to the context to construct two different evaluation datasets.",
"The Krippendorff's alpha for this dataset was 0.815, suggesting reasonable inter-annotator agreement.",
"DailyDialog dataset consists of 13,118 multiturn open-domain conversations written by human workers, and PersonaChat dataset consists of 12,875 multi-turn open-domain conversations written by human workers.",
"The evaluation models used in the experiment are listed below.",
"Among them, BLEU, ROUGE, METEOR, Embedding Average/Extrema/Greedy, and BERTScore are reference-based metrics that evaluate the quality of a response based on its similarity to the golden response.",
"BERT-MLM, GPT2-coherence, BERT-retrieval (random-N), BERT-retrieval (ours) are unreferenced metrics that do not require golden responses.",
"RUBER can be viewed as a hybrid metric that includes both reference-based and unreferenced approaches.",
"Some of the reference-based metrics are simple comparison methods, rather than trainable models, but are presented along with other models because they can also be used to estimate the quality of responses.",
"It should be noted that we do not compare the unsupervised approaches listed below with supervised approaches, such as the ones proposed by Lowe et al. (2017); Zhao et al. (2020), which require human-annotated response-evaluation pairs for training.",
"BLEU is a widely used metric for the machine translation task by measuring n-gram precision between multiple references and a hypothesis (Pap-ineni et al., 2002).",
"ROUGE is a widely used metric for text summarization, which measures the n-gram recall (Lin, 2004).",
"We use the F-score of ROUGE-L as an appropriateness score.",
"METEOR is a metric for the machine translation task, which considers both n-gram precision and n-gram recall of a hypothesis (Banerjee and Lavie, 2005).",
"Embeddding Average/Greedy/Extrema calculate the similarity between golden and generated responses using the embedding similarity to account for the diverse ways in which the golden response could be stated (Liu et al., 2016).",
"BERTScore is a recently proposed unsupervised metric based on the contextualized BERT embeddings (Zhang et al., 2020).",
"RUBER calculates the scores of reference-based and unreferenced metrics individually, then uses them to predict the final score (Tao et al., 2018).",
"The reference-based metric measures the similarity between golden responses and generated responses based on their embedding similarity.",
"The unreferenced metric is trained on the NUP task.",
"BERT-MLM sums the log-likelihood of each token in a response after masking it using an LM that is fine-tuned on a corpus, then uses the aggregated likelihood as the final score of the response (Mehri and Eskenazi, 2020).",
"GPT2-coherence measures the coherence between the dialogue history and a response by using a fine-tuned GPT2 model (Radford et al., 2019) to compute the averaged log-likelihood of the response (Pang et al., 2020).",
"BERT-retrieval (random-N) is a BERT-based model that is trained to distinguish a golden response from a negative sample (Mehri and Eskenazi, 2020), using the dialogue history.",
"We refer to the original model by Mehri and Eskenazi (2020) as BERT-retrieval (random-1) since they used one random response as a negative sample, for a dialogue history.",
"We refer to a variation of the model that uses two random negative samples for a dialogue history, as BERT-retrieval (random-2).",
"This is to fairly compare with our model, which uses two negative samples for a dialogue history, as explained below.",
"BERT-retrieval (ours) is a model that has the same structure as the BERT-retrieval model.",
"The difference is that our model utilizes the negative samples generated by the method that we propose.",
"The model uses both the generated negative samples and the random negative samples.",
"Specifically, during training, the model learns to distinguish a golden response from two negative samples: one generated from our method and one randomly sampled from the corpus.",
"We trained the unreferenced models on the original DailyDialog dataset, and then evaluated them on the two response-evaluation datasets (Sec-tion 4.1.1).",
"We split the conversations in the DailyDialog dataset in a sliding window manner to construct pairs of dialogue histories and corresponding responses.",
"The maximum turn of the dialogue history was set to 5, following Zhao et al. (2020).",
"We use the pretrained BERT and GPT2 released by Wolf et al. (2018) for all of our relevant experiments.",
"2 A BERT model, fine-tuned on the DailyDialog train set with MLM for 1 epoch, was used for the scoring step of our proposed method (Section 3.1).",
"The same model was used for the replacing step (Section 3.3).",
"We used the threshold 3 of 0.5 for the selecting step (Section 3.2).",
"We used Adam optimizer (Kingma and Ba, 2015) for training.",
"We searched for hyperparameters for the BERT-retrieval (random-1) model, that maximize the (Pearson) correlation between human evaluations and model predictions on the response-evaluation dataset made from DailyDialog dataset 2 bert-base-uncased and gpt2-12layer are used.",
"3 We tested threshold values of 0, 0.5, 1, and 2, and found that using 0.5 as the threshold achieved the highest correlation with human evaluations; therefore we report only the experiment results with this value.",
"(Section 4.1.1).",
"The values found in this search (epoch=3, batch size=64, and learning rate=2e-5) were used for all the BERT-retrieval models (random-N, ours).",
"The random seed was fixed for all experiments.",
"In Section 4.2.1, we check the correlations between the results of each evaluation model and human evaluations.",
"In Section 4.2.2, an in-depth analysis of our proposed method is shown.",
"In Section 4.2.3 we present examples that may suggest that automatic evaluation systems that have been trained with the proposed method can make deci-Figure 3: The scatter plots that show in detail the correlations between model predictions and human evaluations.",
"Each of the plots contains 800 system-generated responses in the response-evaluation dataset made from DailyDialog dataset (Section 4.1.1).",
"Each point indicates a response.",
"Its x-value indicates the human evaluation score for the quality of the response, given on a 5-point Likert scale.",
"Its y-value indicates the model prediction for the quality of the response, normalized into the range of [0, 1].",
"The orange line is a linear regression.",
"We add a noise sampled from N (0 , 0 . 09) into human score for better visualization, following previous studies (Lowe et al., 2017; Bak and Oh, 2020; Pang et al., 2020).",
"Table 1 shows the correlation between model predictions and human evaluations for each model, based on the two datasets.",
"Pearson correlation ( r ) and Spearman's rank correlation coefficient ( ) were used to measure the correlation between human score and model prediction.",
"It should be noted that we excluded the scores of golden responses from the response-evaluation datasets and extracted 800 and 750 response-evaluation pairs from the DailyDialog and PersonaChat datasets, respectively.",
"The model incorporating our negative sample method made predictions with higher correlation with human evaluations than the predictions made by BERT-retrieval (random-2), which uses the same number of negative samples for training.",
"Among the baseline models, most of the reference-based metrics showed comparatively low performances.",
"It is thought that these results support the observations made by previous studies suggesting that using the golden response as the one and only correct answer to evaluate responses can be ineffective.",
"RUBER showed better performance than other reference-based models for the DailyDialog dataset, but showed low performance in evaluating PersonaChat responses.",
"The GPT2-coherence model showed similar performance to the BERT-retrieval (random-1) model on the DailyDialog dataset, but relatively low performance in the PersonaChat dataset.",
"It should also be noted that the hybrid and unreferenced models were trained on the DailyDialog dataset, and not on the PersonaChat dataset.",
"Figure 3 shows a scatter plot visualizing the human scores and model predictions for the response-evaluation dataset on DailyDialog.",
"BLEU tended to predict low scores.",
"This may suggest that there were only a few n-gram overlaps between the golden responses and the generated responses.",
"The predictions of embedding-based metrics (Emb. Greedy and BERTScore) were concentrated on a specific range, and showed low correlation with human scores.",
"The unreferenced or hybrid metrics (RUBER, BERT-MLM, GPT2-coherence, and BERT-retrieval (random-1)) show relatively higher correlations than the reference-based metrics.",
"We can see that BERT-retrieval (ours) shows the greatest correlation among the models, with a correlation coefficient of 0.1974.",
"The scatter plots suggest that false-positive predictions, which frequently occurred in the BERT-retrieval (random-1) predictions, occurred less frequently in our model's predictions.",
"However, the scatter plot for our model has a step-function-like appearance.",
"Most of the responses received a score near 0 or near 1, and this is problematic because an ideal model should be able to match human scores even when the scores are moderate.",
"This tendency is considered as a limitation of our model that must be addressed in the future work.",
"We analyze our model, by performing experiments with some variations in making the negative samples to be used with the random negative sample: (1) drop-golden : Instead of following the steps of scoring, selecting, and replacing, we randomly drop some of the words in the golden response to create a negative sample, and use it with the random negative sample.",
"(2) shuffle-golden : Instead of following the three steps, we randomly shuffle the words in the golden response to create a negative sample, and use it with the random negative sample.",
"(3) score-w/o-history : We use the scoring function in Equation 1 without the first term, so that it only considers the probabilities within the sentence without the dialogue history.",
"(4) select-random : Instead of using the scoring function proposed in Equation 1, we randomly select the words to be replaced.",
"(5) replace-w-history : When replacing a word, we concatenate the dialogue history with the response so that the LM considers the dialogue history when replacing the masked words.",
"Table 2 shows the correlations between model predictions and human evaluations for the modified models above.",
"Dropping or shuffling words in the golden response to make a negative sample shows similar or lower performance compared to using random responses (BERT-retrieval (random-1, random-2)).",
"The correlation was lower when the dialogue history was not considered in the scoring process than when it was considered.",
"We speculate that this is because it gives high scores not only to words important for the consistency of a conversation, but also to the words with low likelihoods in general.",
"Randomly selecting the tokens shows lower correlation than using our proposed scoring function.",
"Considering the dialogue history in the replacing process gives lower performance than when it is not considered.",
"We speculate that providing the dialogue history makes predictions on the masked words that are more appropriate to the context, making the reconstructed response less Figure 4: Some examples of cases in which our model predicted scores similar to human evaluations.",
"Figure 4 shows some of the evaluation results of each model on the DailyDialog dataset.",
"The responses in the first and second examples are appropriate to the given dialogue history as suggested by the high human score.",
"BLEU-2 gives a score of 0 .",
"because the response has no bi-grams shared with the golden response.",
"RUBER and GPT2-coherence did not recognize the utterances as appropriate responses.",
"BERT-retrieval (random-1) and BERT-retrieval (ours) gave relatively high scores to the responses, evaluating them as appropriate utterances.",
"In the third example, the system response appears to be somewhat relevant to the given context because it includes some words ( \"chance\", \"future\" ) relevant to the phrase \"take part in the finals\" .",
"A repetition of a phrase in this example ( \"to get a chance\" ) is believed to have contributed to the low human evaluation score (0.12).",
"The RUBER and BERT-retrieval (random) models appear to lack this intuition, and instead evaluate the response as appropriate, possibly because some words appear relevant.",
"Our proposed model scored the response with a relatively low score of 0.15, which was close to the human score.",
"In the fourth example, the response is not coherent, but because it begins with a sentence \"Let me get a peek\" , it could have appeared as a coherent response to the previous dialogue about parking tickets.",
"For this case, our proposed model and GPT2-coherence gave scores similar to human scores.",
"We compute the Part-of-Speech (POS) tag distribution of selected words by our method and compare it with the original distribution of the DailyDialog corpus (Figure 5).",
"4 As we can see, the VERB and NOUN tags are the most frequently selected (21.9% and 20.5%, respectively), and their ratio is increased than in the original corpus (18.3% and 16.7%, respectively).",
"Meanwhile, the ratio of punctuation tag (.) is highly decreased (from 21.3% to 4 We use the NLTK POS tagger ( https://www.nltk. org/book/ch05.html ) with universal tagset.",
"To see whether the negative samples generated by our method are actually inappropriate, we conducted a survey through Amazon Mechanical Turk (AMT).",
"We selected 40 dialogue history examples and prepared three types of responses for each dialogue: 1) the golden response, 2) a negative sample generated by our method, and 3) a randomly selected negative sample from the corpus.",
"For each dialog, 4 annotators were asked to score the quality of the three responses.",
"Following Lowe et al. (2017), we asked the question How appropriate is the response overall? for each context-response pair, and the evaluation was conducted on a 5-point Likert scale.",
"The Fleiss' kappa and Krippendorff's alpha for the annotations were 0.63 and 0.63, respectively.",
"Figure 6 shows the survey results.",
"The mean scores of golden and random responses were 4.65 and 1.19, respectively.",
"The mean score of our negative samples was 2.51.",
"The standard deviations for the scores of each response type were 0.67, 1.27, and 0.41 for the golden response, our negative sample, and the random response, respectively.",
"We see that these results do not guarantee that all the generated negative samples are inappropriate.",
"What we can assume, however, is that our method of manipulating a golden response generates a negative sample that is more inappropriate than the golden response.",
"Table 3 shows two examples of the three different types of responses for a given dialog history with their survey results.",
"Dialog History A : Sir, would you like some dessert now?",
"B : Please show me the menu again.",
"A : Here you are sir.",
"The chocolate cake is very delicious.",
"Responses Golden : No, thanks.",
"I don't like chocolate.",
"I'd like strawberry pie.",
"(5) Ours : No, thanks.",
"I don't have chocolate.",
"I'll like some one.",
"(1.5)",
"Random : I basically believe in science over theology.",
"I mean , I (...) (1) Dialog History A: Could you tell me something about your family ?",
"Responses Golden : Ok.",
"There are five people in my family, father, mother, elder brother, younger sister and I. (5) Ours : Ok.",
"There are five children in my family, father, mother, and brother, and father my me. (3.25) Random : When do you want to move in?",
"(1.25)",
"For a model learning to find the difference between appropriate and inappropriate responses, we speculate that the task of distinguishing the negative samples generated by our method from the golden responses would be more difficult than the task of distinguishing the randomly selected negative samples from the golden responses.",
"We believe that this is because the generated negative samples can be inappropriate in more subtle ways than completely unrelated responses are.",
"We suspect that learning with this more challenging setting have resulted in the performance gain that we discussed in Section 4.2.1.",
"However, we believe that it will need a more in-depth semantic analysis on each of the cases, such as performing a more quantitative analysis (through an extensive human study, for instance) and further interpretation of the semantic relationships between the original golden responses and the modified negative samples according to the proposed method.",
"We leave it as a future work.",
"In this paper, we proposed an automatic method for generating negative samples that can be used to train an unsupervised and unreferenced response evaluation model.",
"We performed experiments to demonstrate that the proposed method can boost the unsupervised training of a response evaluation model.",
"We analyzed the experiment results quantitatively, and examined some examples that show the distinct characteristics of our proposed method.",
"This work was supported by Institute for Information and communications Technology Promotion (IITP) grant funded by the Korea government MSIT) (No. 2018-0-00582, Prediction and augmentation of the credibility distribution via linguistic analysis and automated evidence document collec-tion)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other"
] |
[
"Pre-trained visually grounded language models such as ViLBERT, LXMERT, and UNITER have achieved significant performance improvement on vision-and-language tasks but what they learn during pre-training remains unclear.",
"In this work, we demonstrate that certain attention heads of a visually grounded language model actively ground elements of language to image regions.",
"Specifically, some heads can map entities to image regions, performing the task known as entity grounding .",
"Some heads can even detect the syntactic relations between non-entity words and image regions, tracking, for example, associations between verbs and regions corresponding to their arguments.",
"We denote this ability as syntactic grounding .",
"We verify grounding both quantitatively and qualitatively, using Flickr30K Entities as a testbed.",
"Recently, BERT (Devlin et al., 2019) variants with vision such as ViLBERT (Lu et al., 2019), LXMERT (Tan and Bansal, 2019), and UNITER (Chen et al., 2019) have achieved new records on several vision-and-language reasoning tasks, e.g. VQA (Antol et al., 2015), NLVR 2 (Suhr et al., 2019), and VCR (Zellers et al., 2019).",
"These pre-trained visually grounded language models use Transformers (Vaswani et al., 2017) to jointly model words and image regions.",
"They are pre-trained on paired image-text data, where given parts of the input the model is trained to predict the missing pieces.",
"Despite their strong performance, it remains unclear if these models have learned the desired cross-modal representations.",
"Conversely, a large body of work (Liu et al., 2019; Tenney et al., 2019; Clark et al., 2019) has focused on understanding the internal behaviours of pre-trained language models (Peters et al., 2018b; Radford et al., 2018; Devlin et al., 2019) and find that they capture linguistic features such as POS, syntactic structures, and coreferences.",
"This inspires us to ask: what do visually grounded language models learn during pre-training?",
"Following Clark et al. (2019), we find that certain attention heads of a visually grounded language model acquire an intuitive yet fundamental ability that is often believed to be a prerequisite for advanced visual reasoning (Plummer et al., 2015): grounding of language to image regions.",
"We first observe that some heads can perform entity grounding , where entities that have direct semantic correspondences in the image are mapped to the correct regions.",
"For example, in Figure 1, the word man attends to the person on the left of the image.",
"Further, non-entity words often attend to image regions that correspond to their syntactic neighbors and we call this syntactic grounding .",
"For example, wearing is attending to its subject, the man in the image.",
"We argue that syntactic grounding actually complements entity grounding and that it is a natural byproduct of cross-modal reasoning.",
"For example, to ground man to the person on the left rather than other pedestrians, the model needs to identify the syntactic relationships among man, wearing, white, and shirt and ground shirt and man subsequently.",
"During such process, it is helpful and natural that wearing attends to the man in the image.",
"We verify such phenomena by treating each attention head as a ready-to-use classifier (Clark et al., 2019) that given an input word, always outputs the most-attended-to image region.",
"Using Flickr30K Entities (Plummer et al., 2015) as a test bed, we demonstrate that certain heads could perform entity and syntactic grounding with an accuracy significantly higher than a rule-based baseline.",
"Further, higher layers tend to have higher grounding accuracy, suggesting that the model is Man Shirt Sidewalk Pedestrians Sidewalk* Layer 3 Layer 4 Layer 5 Layer 6 Layer 10 Layer 11 Figure 1: Attention weights of some selected heads in a pre-trained visually grounded language model.",
"refining its understanding of vision and language layer by layer.",
"Additionally, we provide a qualitative analysis exemplifying these phenomena.",
"A long version of this paper is at https://arxiv.",
"org/abs/1908.03557 .",
"Our code is available at https://github.com/uclanlp/visualbert .",
"Several pre-trained visually grounded models have been proposed recently, and they are conceptually similar yet vary in design details, making evaluating them complicated and difficult.",
"Thus for simplicity, we propose a simple and performant baseline, VisualBERT (see Figure 2), and base our analysis on this model.",
"We argue that our analysis on VisualBERT can be generalized to other similar models as all these models share the following two core ideas: (1) image features extracted from object detectors such as Faster-RCNN (Ren et al., 2015) are fed in a Transformer-based model along with text; (2) the model is pre-trained on image-text data Task Baseline VisualBERT VQA 68.71 70.80 VCR 44.0 52.4 NLVR 2 53.5 67.3 Flickr30K 69.69 71.33 Table 1: Performance of VisualBERT on four benchmarks.",
"and leave details to the Appendix A. Input to VisualBERT includes a text segment and an image.",
"The image is represeted as a set of visual embeddings, where each embedding vector corresponds to a bounding region in the image, derived from an object detector (Ren et al., 2015).",
"Text and visual embeddings are then passed through multiple Transformer layers to build joint representations.",
"VisualBERT is pre-trained on the COCO dataset (Chen et al., 2015), concisting of around 100K images with 5 captions each.",
"We use two objectives for pre-training.",
"(1) Masked language modeling with the image.",
"Some elements of text input are masked and the model learns to predict the masked words based on the remaining text and visual context.",
"(2) Sentence-image prediction.",
"For COCO, where there are multiple captions corresponding to one image, we provide a text segment consisting of two captions.",
"One of the caption is describing the image, while the other has a 50% chance to be another corresponding caption and a 2 4 6 8 10 12 Layer 0.1 0.2 0.3 0.4 0.5 G r ound i ng A cc Figure 3: Entity grounding accuracy of the attention heads organized by layer.",
"50% chance to be a randomly drawn caption.",
"The model is trained to distinguish these two situations.",
"Extensive experiments on four vision-and-language datasets (Goyal et al., 2017; Zellers et al., 2018; Suhr et al., 2019; Plummer et al., 2015) verify that pre-trained VisualBERT exceeds all comparable baselines significantly.",
"A summary of the results is present in Table 1.",
"See the Appendix B for details.",
"Some of the afore-mentioned pre-trained visually grounded language models use additional pre-training data or parameters and achieve better performance.",
"As this paper focuses on the analysis, we do not focus on comparing the performance of VisualBERT and other similar models.",
"For the rest of the paper, we analyze a VisualBERT that is con-figured the same as BERT Base with 12 layers and 144 self-attention heads in total.",
"The model is pre-trained on COCO.",
"To mitigate the domain difference between the diagnostic dataset Flickr30K and COCO, we perform additional pre-training on the training set of Flickr30K with the fore-mentioned masked language modeling objective with the image.",
"Entity Grounding We first focus on entity grounding and use the validation set of Flickr30K Entities for evaluation.",
"The dataset contains image-caption pairs and annotates the entities in the captions and the corresponding image regions.",
"For each annotated entity and for each attention head of VisualBERT, we take the bounding region which receives the most attention weight as the prediction.",
"An entity could attend to not only the image regions Type Baseline Acc Head det 19.59 54.01 10-1 pobj 17.34 32.82 11-11 amod 18.67 45.96 10-9 nsubj 23.19 44.64 5-1 prep 20.61 49.27 9-11 dobj 9.82 30.24 11-11 punct 23.32 48.80 3-6 partmod 21.41 38.15 4-9 nn 16.33 34.06 10-9 num 23.15 67.44 9-11 Table 2: The best performing heads on grounding 10 most common dependency relationships.",
"but also other words in the text.",
"For this evaluation, we regard the image region that receives the most attention weight compared to other image regions as the prediction, without considering other words in the text.",
"The predicted region is considered correct as long as it overlaps with the gold bounding region with a IoU 0.5 (Kim et al., 2018).",
"We also consider a rule-based baseline that always chooses the region with the highest detection confidence.",
"We report the accuracy for all 144 attention heads in VisualBERT and the baseline in Figure",
"3. Despite that some heads are accurate at entity grounding, they are not actively attending to the image regions.",
"For example, a head might be allocating 10% of its attention weights to all image regions, but it assigns the most of the 10% weights to the correct region.",
"We regard heads paying on average more than 20% of its attention weights from the entities to the regions as actively paying attention to the image and draw then as dark and large dots, while the others are drawn as light and small dots.",
"We make the following two observations.",
"First, certain heads perform entity grounding with a remarkably high accuracy .",
"This is consistent with the observations in Clark et al. (2019) and Voita et al. (2019) that the attention heads specialize in different things.",
"The best of all heads even achieves a high accuracy of 50.77 compared to the baseline 17.33.",
"Further, the grounding accuracy peaks in higher layers .",
"This resembles what Tenney et al. (2019) find, in that BERT also refines its understanding of the input over the layers.",
"image regions could also be helpful for visual reasoning.",
"More specifically, if two words are connected with a dependency relation, w 1 r w 2 , and w 1 is an entity aligned to an image region, we would like to know how often the attention heads attend from w 2 to the regions corresponding to w 1 .",
"For evaluation, we parse all sentences in the validation set of Flickr30K using AllenNLP (Dozat and Manning, 2017; Gardner et al., 2018) and use the parser output as the gold parsing annotation.",
"We find that for each dependency relationship, there exists at least one head that significantly outperforms guessing the most confident bounding region.",
"We report the 10 most common relations in Table 2 and plot the syntactic grounding accuracy of three particularly interesting dependency relationships in Figure",
"4. Similar to what we observe for entity grounding, the model becomes more accurate on syntactic grounding in higher layers.",
"Finally, we showcase several interesting examples of how VisualBERT performs grounding in Figure 1 and Figure",
"5. To generate these examples, for each ground-truth box, we show a predicted bounding region closest to it and manually group the bounding regions into different categories.",
"We also include regions that the model is actively attending to, even if they are not present in the gold annotations (marked with an asterisk).",
"We then aggregate the attention weights from words to those regions in the same category.",
"We show the best heads of 6 layers that achieve the highest entity grounding accuracy but we find that they also exhibit a certain level of syntactic grounding.",
"We observe the same behaviours as in the quantitative analysis, in that VisualBERT not only performs grounding but also refines its predictions through successive Transformer layers.",
"For example, in the bottom image in Figure 5, initially the word husband and the word woman both assign significant attention weight to regions corresponding to the woman.",
"By the end of the computation, VisualBERT has disentangled the woman and man, correctly aligning both.",
"Furthermore, there are many examples of syntactic alignments.",
"In the same image, the word teased aligns to both the man and woman while by aligns to the man. 4 Related Work There is a long research history of bridging vision and language (Chen et al., 2015; Antol et al., 2015; Zellers et al., 2019) with the lasted advances being visually grounded language models (Lu et al., 2019; Alberti et al., 2019; Li et al., 2019; Su et al., 2019; Tan and Bansal, 2019; Chen et al., 2019).",
"However, little analysis has been done on understanding what vision-and-language models learn.",
"Previous works on VQA and image captioning (Yang et al., 2016; Anderson et al., 2018; Kim et al., 2018) have only shown qualitative examples on the grounding ability of the models, while another line of work focuses on designing dedicated models for the entity grounding task (Xiao et al., 2017; Datta et al., 2019).",
"We, however, present a quantitative study on whether visually grounded language models acquire the grounding ability during pre-training without explicit supervision.",
"Our work is inspired by papers on analyzing pre-trained language models.",
"One line of work uses probing tasks to study the internal representations (Peters et al., 2018a; Liu et al., 2019; Tenney et al., 2019) while another studies the attention mechanism (Clark et al., 2019; Voita et al., 2019; Koval-eva et al., 2019).",
"We follow the latter but we believe the grounding behaviour could also be probed in the internal representations of VisualBERT.",
"We have presented an analysis on the attention maps of VisualBERT, a proposed visually grounded language model.",
"We note that the grounding behaviour we have found is linguistically inspired, as entity grounding can be regarded as cross-modal entity coref resolution while syntactic grounding can be regarded as cross-modal parsing.",
"Moreover, VisualBERT exhibits a hint of cross-modal pronoun resolution, as in the bottom image of Figure 5, the word her is resolved to the woman.",
"For future work, it would be interesting to see if more linguistically-inspired phenomena can be systematically found in cross-modal models.",
"We would like to thank Xianda Zhou for help with experiments as well as Patrick H. Chen, members of UCLA NLP, and anonymous reviewers for helpful comments.",
"We also thank Rowan Zellers for evaluation on VCR and Alane Suhr for evaluation on NLVR 2 .",
"Cho-Jui Hsieh acknowledges the support of NSF IIS-1719097 and Facebook Research Award.",
"This work was supported in part by DARPA MCS program under Cooperative Agreement N66001-19-2-4032.",
"The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"Recently, large-scale datasets have vastly facilitated the development in nearly all domains of Natural Language Processing.",
"However, there is currently no cross-task dataset in NLP, which hinders the development of multi-task learning.",
"We propose MATINF , the first jointly labeled large-scale dataset for classification, question answering and summarization.",
"MATINF contains 1.07 million question-answer pairs with human-labeled categories and user-generated question descriptions.",
"Based on such rich information, MATINF is applicable for three major NLP tasks, including classification, question answering, and summarization.",
"We benchmark existing methods and a novel multi-task baseline over MATINF to inspire further research.",
"Our comprehensive comparison and experiments over MATINF and other datasets demonstrate the merits held by MATINF .",
"1 1 Introduction In recent years, large-scale datasets (e.g., Ima-geNet (Deng et al., 2009) and SQuAD (Rajpurkar et al., 2016)) have inspired remarkable progress in many areas like Computer Vision (CV) and Natural Language Processing (NLP).",
"On the one hand, well-annotated data provide essential information for training supervised machine learning models.",
"On the other hand, benchmarked datasets make it possible to evaluate and compare the capability of different methods on the same stage.",
"Due to the high cost of data annotation, existing NLP datasets are usually labeled for only one particular task (e.g., SQuAD (Rajpurkar et al., 2016) for question answering, CNN/DM (Hermann et al., The first two authors contribute equally to this paper. Chenliang Li is the corresponding author. 1 The implementation of MTF-S2S and information about obtaining access to the dataset can be found at https:// github.com/WHUIR/MATINF . 2015) for summarization and AGNews (Zhang et al., 2015) for text classification).",
"These single-task datasets hinder the development of learning common and task-invariant knowledge (Liu et al., 2017).",
"Although multi-task learning and transfer learning have delivered encouraging results, we still cannot determine whether the improvement is from the extension of input or supervision.",
"Furthermore, task-specific data make the models tend to learn task-specific leakage features (Zhang et al., 2019) rather than meaningful knowledge that could generalize to other tasks.",
"However, as a key step to Artificial General Intelligence (AGI), knowledge acquisition requires the model to learn more general knowledge instead of overfitting on a specific task.",
"Therefore, a large-scale and cross-task dataset is in huge demand for future NLP research.",
"Nevertheless, to the best of our knowledge, none of the existing datasets could meet such demand.",
"In this paper, we propose Mat ernal and Inf ant Dataset (MATINF ), the first large-scale dataset covering three major NLP tasks: text classification, question answering and summarization.",
"MATINF consists of question answering data crawled from a large Chinese maternity and baby caring QA site.",
"On this site, users can ask questions related to maternity and baby caring.",
"When submitting a question, a detailed description is required to provide essential information and the asker also needs to assign a category for this question from a pre-defined topic list.",
"Each user could submit an answer to a question post, and the asker will select the best answer out of all the candidates.",
"To attract more attention, the askers are encouraged to set rewards using virtual coins when submitting the question and these coins will be given to the user who submitted the best answer selected by the asker.",
"This rewarding mechanism could constantly ensure high-quality answers.",
"MATINF supports three NLP tasks as follows.",
"Text Classification.",
"Given a question and its detailed description, the task is to select an appropriate category from the fine-grained category list.",
"Different from previous news classification tasks whose category set is general topics like entertainment and sports, MATINF-C is a fine-grained classification under a single domain.",
"That is, the distance between different categories is smaller, which provides a more challenging stage to test the continuously evolving state-of-the-art neural models.",
"Question Answering.",
"Given a question, the task is to produce an answer in natural language.",
"This task is slightly different from previous Machine Reading Comprehension (MRC) since the document which contains the correct answer is not directly provided.",
"Therefore, how to collect the domain knowledge from massive QA data becomes extremely important.",
"Summarization.",
"Given a question description, the task is to produce the corresponding question.",
"Previous summarization datasets are all constructed with news or academic articles.",
"The limited text genres covered in these datasets hinder the thor-ough evaluation of summarization models.",
"Also, the noisy nature of MATINF encourages more robust models.",
"MATINF can be considered as the first social media summarization dataset.",
"MATINF holds the following merits: (1) Large .",
"MATINF includes 1.07M unique QA pairs, making it an ideal playground for the new advancements of deeper and larger models (e.g., Pretrained Language Models).",
"(2) Multi-task applicable .",
"MATINF is the first dataset that simultaneously contains ground truths for three major NLP tasks, which could facilitate new multi-task learning methods for these tasks.",
"Here, to set a baseline and inspire future research, we present M ultit ask F ield-shared S equence to S equence (MTF-S2S), a straightforward yet effective model, which achieves better performance on all three tasks compared to its single-task counterparts.",
"Topic classification is one of the most fundamental tasks in NLP.",
"As a deeply explored task, many datasets have been used in previous research both in English (AGNews, DBPedia, Yahoo Answer (Zhang et al., 2015), TREC (Voorhees and Tice, 1999)) and Chinese (THUCNews (Sun et al., 2016), SogouCS (Wang et al., 2008a), Fudan Corpus, iFeng and ChinaNews (Zhang and LeCun, 2017)).",
"These datasets were useful and indispensable in the past decades to test the performance of different kinds of classifiers.",
"However, as most of them are formal text and the target categories are general topics, even simply leveraging n-gram features could achieve acceptable results.",
"Plus, some of them are small in scale.",
"Nowadays, with the prevalence of neural models and pretraining techniques, recent algorithms (Sun et al., 2018; Wu et al., 2019) are approaching the ceiling of these datasets with accuracy scores up to 98% .",
"Different from any of the existing datasets, MATINF is more challenging, providing a new stage to test the performance of future algorithms.",
"Following the definition in (Jurafsky and Martin, 2009), Question Answering (QA) can be generally divided into Information Retrieval (IR) based Question Answering and Knowledge-based Question Answering.",
"For IR-based Question Answering, the answer is often a span in the retrieved document.",
"As for Knowledge-based Question Answering, a human-constructed knowledge base is provided for querying and the answer is in the form of a query result.",
"Recently, Open Domain QA (Chen et al., 2017) has been recognized as a new genre where a natural language response instead of text spans is returned as an answer.",
"Currently, several datasets are available for Chinese Question Answering.",
"NLPCC Shared Task (Duan and Tang, 2017) provided two datasets for IR-based and Knowledge-based QA, respectively.",
"DuReader (He et al., 2018) is an Open Domain dataset derived from user search logs and provided with human-picked documents as evidence.",
"Zhang and Zhao (2018) provided a QA dataset in the domain of Chinese College Entrance Test history exam questions, with documents from standard history textbooks.",
"Different from these datasets, instead of providing pre-defined documents as evidence, MATINF-QA only contains sufficient QA pairs in the training set.",
"In this way, there are various approaches to exploit these questions as evidence.",
"Thus, MATINF-QA encourages innovations in retrieval, generation and hybrid question answering methods.",
"Summarization datasets can be roughly categorized into extractive and abstractive datasets, which respectively favor abstractive and extractive methods.",
"Extractive datasets are composed of long documents and summaries.",
"Since the summary is long, extracted sentences and spans from the document could compose a good summary.",
"Newsroom (Grusky et al., 2018), ArXiv and PubMed (Cohan et al., 2018) and CNN / Daily Mail dataset (Hermann et al., 2015) are commonly used extractive datasets.",
"Abstractive datasets often contain short documents and summaries, which encourages a thor-ough understanding of the document and style transfer between a document and its corresponding summary.",
"Gigaword (Napoles et al., 2012) and Xsum (Narayan et al., 2018) fall into this category.",
"Also, the abstractive dataset LCSTS (Hu et al., 2015), crawled from verified short news feeds of major newspapers and televisions, is the only public dataset for Chinese text summarization to date.",
"However, all of these existing datasets are composed of either news or academic articles.",
"The narrow sources of these datasets bring two main drawbacks.",
"First, due to the nature of news reporting and academic writing, the summary-eligible contents do not distribute uniformly (Sharma et al., 2019).",
"Second, models evaluated on these noiseless formal-text datasets are not robust enough for real-world applications.",
"To address these problems, we propose MATINF-SUMM , a new abstractive Chinese summarization dataset.",
"We present Mat ernal and Inf ant (MATINF ) Dataset, a large-scale dataset jointly labeled for classification, question answering and summarization in the domain of maternity and baby caring in Chinese.",
"An entry in the dataset includes four fields: question (Q) , description (D) , class (C) and answer (A) .",
"An example is shown in Figure 1, and the average character and word numbers of each field are reported in Table 1. We collect nearly two million question-answer pairs with fine-grained human-labeled classes from a large Chinese maternity and baby caring QA site.",
"We conduct both automatic and manual data cleansing and remove: (1) classes with insufficient samples; (2) entries in which the length of the description filed is less than the length of the question field; (3) data with any field longer than 256 characters; (4) human-spotted ill-formed data.",
"After the data cleansing, we construct MATINF with the remaining 1 .",
"07 million entries.",
"We first randomly split the whole data into training, validation and test sets with a proportion of 7:1:2.",
"Then, we use the splits for summarization and QA.",
"For classification, we further divide the data into two sub-tasks according to different classification standards within each split.",
"In MATINF , the class labels are first selected by the users when submitting a question.",
"Then, if the question is not in the right class, the forum administrators would manually re-categorize the question to the correct class.",
"In our data, there are two parallel standards for classifying a question: topic class and age of the baby .",
"We use these two standards to construct our two subsets.",
"Thus, we define two tasks: (1) classifying a question to different age groups; (2) classifying a question into a fine-grained topic.",
"We list the classes of the two tasks in Table 2. Note that there is no data overlap MATINF-C-TOPICMATINF-C-AGE 18 classes 3 classes postpartum health care 0-1 0-1 yr old child allergy 1-2 1-2 yrs old motion development 2-3 2-3 yrs old infant health care infant psychology early education infant feeding infant nutrition pregnancy care family education kindergarten pregnancy preparation infertility problem vaccination skin care infant ulcer diarrhea other infant common diseases Table 2: Class names of two subsets and their English translations.",
"between the two subsets.",
"Formally, we define the task as predicting the class of a QA pair with its question and description fields (i.e., Q, D C ).",
"Different from previous datasets, our task is a fine-grained classification (i.e., to classify documents in a domain) rather than classifying general topics (e.g., politics, sports, entertainments), which means the semantic difference between classes is prominently smaller.",
"It requires meticulous exploitation of semantics instead of recognizing unique n-gram features for each class.",
"We provide statistical comparison of MATINF-C with other datasets in Table 3. 3.2 MATINF-QA: Health-Domain Question Answering Typically, to return an answer for a specific question, the model needs to retrieve from a pre-defined document set or query a manually-constructed knowledge base.",
"MS-MARCO (Nguyen et al., 2016) utilizes a search engine to pre-filter 10 documents from the Internet and uses them as the document set.",
"However, searching itself is a challenging task that significantly affects the final performance.",
"On the other hand, in a real-world scenario, it is impossible to define a document set covering all knowledge needed to answer a user question.",
"Thus, we provide the training set of MATINF-QA as the possible document source and encourage all kinds of methods including retrieval, generation and hybrid models.",
"Formally, the task is defined as replying a question with natural text (i.e., Q A ).",
"The large scale of our dataset ensures that a model is able to generalize and learn enough knowledge to answer a user question.",
"Note that we do not use description when defining this task since we observe a negative effect on the generalization in our experiment.",
"Shown in Table 4, we list statistics of MATINF-QA and other commonly-used datasets.",
"All current datasets for summarization to date are in the domain of news and academic articles.",
"However, as a custom of the report and academic writing, in extractive datasets, the summary-eligible contents often appear at the beginning or the end of an article, preventing the summarization model from a full understanding and resulting in impractically high performance in evaluation.",
"On the other hand, current abstractive datasets are all formal news datasets, which are in lack of diversity.",
"Models trained on such a single-source dataset is not robust enough to handle real-world complexity.",
"In MATINF-SUMM , question description can be seen as an extended and specific version of the question itself, containing more detailed background information with respect to the question.",
"Besides, the question itself is often a well-formed interrogative sentence rather than extracted phrases.",
"Our task is to generate the question from the corresponding description (i.e., D Q ).",
"Note that our task itself can support many meaningful real-world applications, e.g., generating an informative title for user-generated content (UGC).",
"Also, there is only one public dataset for summarization in Chinese to date.",
"Our dataset can be used to verify the effectiveness of existing models and eliminate the Dataset Lang.",
"overfitting bias caused by evaluation on merely one dataset.",
"We compare MATINF-SUMM with other datasets in Table 5.",
"Recently, many attempts have been made on multitask learning in NLP (Liu et al., 2015; Luong et al., 2016; Guo et al., 2018; McCann et al., 2018; Xu et al., 2019; Ruder et al., 2019; Liu et al., 2019; Radford et al., 2019; Dong et al., 2019; Shen et al., 2019; Raffel et al., 2019; Lei et al., 2020) and several benchmarks are available for multi-task evaluation (Wang et al., 2019a,b).",
"Though recent studies show that multi-task learning is effective, there is still one more question to answer.",
"That is, when training models on multiple tasks, multiple datasets are used by default.",
"As illustrated in Figure 2(a), it adds both new input (i.e., text, denoted as X ) and new supervision (i.e., ground truths, denoted as Y ).",
"Due to the different processes of data collection, X in different datasets have different sources and properties.",
"Recent progress on Language Modeling (Radford et al., 2019; Devlin et al., 2019; Yang Multi-task Model Taskspecific 1 Traditional X 1 Y 1 Y 2 MTF-S2S MATINFX Y 1 Y 2 X 2 Shared Layer Layer sharing Input Taskspecific 2 Task-specific 1 Shared Module I n p u t Task 1 Task 2 Module sharing Task-specific 2 Task 1 Task 2 (a)",
"et al., 2019; Raffel et al., 2019) has proved that corpora ( X ) from different sources can make the model more robust and significantly improve the performance.",
"To this end, it is not easy to determine whether the success of a multi-task model should be mainly attributed to the addition of X or Y .",
"However, as depicted in Figure",
"2(b), in MATINF , our jointly labeled fashion can guarantee that X remains the same as in a single task and only Y is added.",
"Thus, MATINF provides a fair and ideal stage for exploring multi-task learning, especially auxiliary and multi-task supervision under a single dataset.",
"To set a baseline and also inspire future research, we design a multi-task learning network, named D 0 D 1 D 2 D n (cid:31)(cid:86)(cid:33) (cid:31)(cid:72)(cid:33) (cid:3183) (cid:2617) (cid:335) (cid:335) DescriptionEncoder Q 0 Q 1 Q 2 Q n (cid:31)(cid:86)(cid:33) (cid:31)(cid:72)(cid:33) (cid:2602) (cid:2602) (cid:335) (cid:335) SharedQuestionEncoder/Decoder (cid:31)(cid:72)(cid:33) (cid:1322) (cid:4913) (cid:335) whendecode whenencode (cid:4108) A 0 A 1 A 2 A n (cid:31)(cid:86)(cid:33) (cid:31)(cid:72)(cid:33) (cid:2825) (cid:1157) (cid:335) (cid:335) AnswerDecoder (cid:3979) Classifier SharedFC Layer AgeClassifier TopicClassifier 0 (cid:16)(cid:20)(cid:2702) (cid:19)(cid:16)(cid:20)(cid:3)(cid:92)(cid:85)(cid:3)(cid:82)(cid:79)(cid:71) (cid:2518)(cid:2846)(cid:1499)(cid:1547) (cid:76)(cid:81)(cid:73)(cid:68)(cid:81)(cid:87)(cid:3)(cid:75)(cid:72)(cid:68)(cid:79)(cid:87)(cid:75)(cid:3)(cid:70)(cid:68)(cid:85)(cid:72) Figure 3: The architecture of MTF-S2S.",
"M ultit ask F ield-shared S equence to S equence (MTF-S2S).",
"We illustrate the architecture of MTF-S2S in Figure 3. For generation tasks, we combine the summarization ( D Q ) and QA ( Q A ) to be the form of D Q A , with a shared Long Short-Term Memory (LSTM) for decoding questions in summarization task and encoding questions for both QA and classification tasks.",
"Previous studies often share layers among tasks to regularize the representation learning, as illustrated in Figure",
"2(c).",
"Different from that, MTF-S2S shares on both module level (i.e., field encoder/decoder, as shown in Figure",
"2(d)) and layer level.",
"An attention mechanism is applied when decoding for summarization and QA.",
"Also, we concatenate the encoded representations of description and question, and feed it to a shared fully connected layer and then specialized fully connected layers for age classification and topic classification, respectively.",
"When training, since the sizes of datasets for different tasks are not equal, we first determine the batch size for different tasks to make sure that the training progress for each task is approximately synchronized by: a, b T, bs a /bs b = n a /n b (1) where T includes four tasks: summarization, QA, and two classification tasks.",
"bs is the batch size of each task, and n is the sample numbers in each dataset for the task.",
"If one task is iterated to the last data batch, it will start over from the first batch.",
"For each iteration, we successively calculate the losses by Cross Entropy for each task in one batch.",
"Then, we train the model to minimize the total loss: L = (cid:88) t i T i L i (2) where is the manually set weight for each task.",
"We stop the co-training after one epoch, then fine-tune the model to obtain the peak performance for each task, separately.",
"In this section, we benchmark a few baselines and MTF-S2S on the three tasks of MATINF .",
"We run each experiment with three different random seeds and report the average result of the three runs.",
"MTF-S2S.",
"For MTF-S2S, we set all i = 0 .",
"25 and use an Adam (Kingma and Ba, 2015) optimizer to co-train the model for one epoch with batch sizes of 64 , 64 , 12 and 52 for bs Summ , bs QA , bs CTopic , and bs CAge respectively with a learning rate of 0 .",
"001 .",
"Then we fine-tune the model for each task with a learning rate of 5 10 5 .",
"We report both the performance after co-training and after fine-tuning.",
"The hidden size of all LSTM encoders/decoders and attentions is 200 .",
"For all tasks, we separately train MTF-S2S on each task only to provide a single-task baseline.",
"Both MTF-S2S and Seq2Seq baselines are character-based and their embeddings are initialized with Tencent AI Lab Embedding (Song et al., 2018).",
"For both MTF-S2S and Seq2Seq baselines, we use Beam Search (Wiseman and Rush, 2016) when decoding.",
"Classification.",
"For classification, we conduct experiments with a statistical learning baseline, several deep neural networks and pretrained large-scale language models.",
"For the statistical baselines, we extract character-based unigram and bigram features and use a logistic classifier to predict the classes.",
"For neural networks, we choose fastText (Grave et al., 2017), Text CNN (Kim, 2014), DCNN (Kalchbrenner et al., 2014), RCNN (Lai et al., 2015) and DPCNN (Johnson and Zhang, 2017).",
"As a classical step in Chinese text classification, we segment the sentences into words with Jieba 2 , a commonly used out-of-the-box word segmentation toolkit.",
"We then initialize the word embedding with pretrained Tencent AI Lab Embedding (Song et al., 2018) except for fastText, which has its own algorithm to construct word embeddings.",
"We minimize the Cross-Entropy with Adam (Kingma and Ba, 2015) optimizer with a learning rate of 0 .",
"001 and apply early stopping.",
"For language models, we fine-tune BERT (Devlin et al., 2019) and ERNIE (Sun et al., 2019) that both have released official pretrained Chinese models.",
"We set the learning rate for fine-tuning to 5 10 5 and apply early stopping.",
"We also compress the fine-tuned 12-layer BERT model with BERT-of-Theseus (Xu et al., 2020) and obtain the performance of a 6-layer model.",
"Question Answering.",
"For retrieval-based QA, following MS-MARCO (Nguyen et al., 2016), we calculate the average best scores between each answer in the test set and all answers in the training set within the same class, to determine the oracle retrieval performance.",
"Then, we construct our retrieval-based baseline by fine-tuning BERT-Base (Devlin et al., 2019) for question matching on an external dataset, LCQMC (Liu et al., 2018).",
"Then we use the trained model to score the match between each question in the test set and all questions in the training set with the same class and return the answer of the top 1 matched question.",
"For generation-based baselines, we use character-based Seq2Seq (Sutskever et al., 2014) and Seq2Seq with Attention (Luong et al., 2015), since character-based method has a prominently better performance for Chinese text generation (Hu et al., 2015; Li et al., 2019).",
"The metric for evaluation are ROUGE scores (Lin and Hovy, 2003) calculated on the character level.",
"Summarization.",
"We categorize the baselines into two fashions: extractive methods (i.e., extracting sentences or phrases from the text) and abstractive methods (i.e., generating summaries according to the text).",
"For extractive methods, we choose two widely used classical methods, TextRank (Mi-halcea and Tarau, 2004) and LexRank (Erkan and 2 https://github.com/fxsjy/jieba .",
"Radev, 2004).",
"For abstractive methods, we use WEAN (Ma et al., 2018) and Global Encoding (Lin et al., 2018) along with Seq2Seq (Sutskever et al., 2014; Luong et al., 2015) as the baselines.",
"We also add BertAbs (Liu and Lapata, 2019), a BERT-based summarization model, to reflect the recent progress on this task.",
"We use the officially released Chinese BERT-Base as the backbone.",
"We use ROUGE scores (Lin and Hovy, 2003) to evaluate the quality of generated summaries.",
"Classification.",
"We show the experimental results of two classification sub-tasks in Table 6. On the tougher MATINF-C-TOPIC , language models prominently outperform other baselines.",
"Among non-LM neural networks, DPCNN (Johnson and Zhang, 2017), which has the deepest architecture and the most parameters, outperforms other baselines with a considerable margin.",
"On MATINF-C-AGE , which is a smaller dataset with fewer classes, DPCNN outperforms all other baselines including CNN/DM LCSTS MATINF-SUMM Method R-1 R-2 R-L R-1 R-2 R-L R-1 R-2 R-L TextRank (Mihalcea and Tarau, 2004) 37.72 15.59 33.81 24.38 11.97 16.76 35.53 25.78 36.84 LexRank (Erkan and Radev, 2004) 33.98 11.79 30.17 22.15 10.14 14.65 33.08 23.31 34.96 Seq2Seq (Sutskever et al., 2014) ---23.05 11.44 19.55 Seq2Seq + Att (Luong et al., 2015) 31.33 11.81 28.83 33.80 23.10 32.50 43.05 28.03 38.58 WEAN (Ma et al., 2018) --37.80 25.60 35.20 34.63 22.56 28.92 Global Encoding (Lin et al., 2018) --39.40 26.90 36.50 49.28 34.14 47.64 BertAbs (Liu and Lapata, 2019) 40.21 17.76 37.09 --57.31 44.05 55.93 MTF-S2S (single task) 31.36 11.80 28.88 33.75 23.20 32.51 43.02 28.05 38.55 MTF-S2S ---48.59 35.69 43.28 Table 8: Experimental results of baseline methods on CNN / DM (Hermann et al., 2015), LCSTS (Hu et al., 2015), and MATINF-SUMM .",
"language models with an accuracy of 91 .",
"02 .",
"To analyze, this task has fewer training samples, which is in favor of a model with moderate parameter numbers instead of huge parameter numbers as in language models.",
"Also, the task is relatively easier due to the class number, which makes the advantage of language models more trivial.",
"For the multi-task baseline, MTF-S2S shows a satisfying performance on both MATINF-C-AGE and MATINF-C-TOPIC , outperforming the same model which is only trained on the single task by 0 .",
"14 and 0 .",
"19 in terms of accuracy.",
"Notably, BERT-of-Theseus (Xu et al., 2020) has a satisfying performance compressing the fine-tuned BERT to smaller models.",
"Question Answering.",
"The experimental results are shown in Table 7. The high scores of Best Passage (maximum possible performance) indicate that using training data as a document set is completely feasible.",
"Seq2Seq with Attention outperforms the retrieval-based baseline by a margin of 2 .",
"56 in terms of ROUGE-L.",
"It suggests that a generation-based neural network can effectively learn from multiple relevant samples and generalize.",
"Besides, since we do the matching between each question and every entry within the same class in the training set, the inference of BERT Matching takes quite a long time.",
"Similar to MS-MARCO (Nguyen et al., 2016), it is possible to use a search engine (e.g., Elastic Search) to pre-filter the documents and reduce the computational cost.",
"Meanwhile, MTF-S2S is effective on QA task and outperforms its single-task version by 0 .",
"74 on ROUGE-L.",
"Summarization.",
"We further conduct performance comparison for summarization across three datasets, CNN/DM (Hermann et al., 2015), LCSTS (Hu et al., 2015), and our MATINF-SUMM in Table 8. By comparing the performance of two ba-sic baselines, TextRank (Mihalcea and Tarau, 2004) and Seq2Seq+Att (Luong et al., 2015), we can see an obvious difference in performance between extractive and abstractive methods on datasets of different genres.",
"BertAbs (Liu and Lapata, 2019), the powerful BERT-based model, significantly outperforms all other baselines on MATINF-SUMM thanks to its exploitation of pretraining and the capacity of a BERT model.",
"For MTF-S2S, it outperforms the single-task counterpart by 4 .",
"73 on ROUGE-L.",
"Since MATINF is a web-crawled dataset, it would be inevitable to be noisier than a dataset annotated by hired annotators though we have made every effort to clean the data.",
"On the bright side, it can encourage more robust models and facilitate real-world applications.",
"For future work, we would like to see more interesting work exploring new multi-task learning approaches.",
"To conclude, in this paper, we present MATINF , a jointly labeled large-scale dataset for classification, question answering and summarization.",
"We benchmark existing methods and a straightforward baseline with a novel multi-task paradigm on MATINF and analyze their performance on these three tasks.",
"Our extensive experiments reveal the potential of the proposed dataset for accelerating the innovations in the three tasks and multi-task learning.",
"We are grateful for the insightful comments from the anonymous reviewers.",
"This research was supported by National Natural Science Foundation of China (No. 61872278).",
"Chenliang Li is the corresponding author."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"other",
"other",
"other"
] |
[
"Knowledge Graph Embeddings (KGEs) have been intensively explored in recent years due to their promise for a wide range of applications.",
"However, existing studies focus on improving the final model performance without acknowledging the computational cost of the proposed approaches, in terms of execution time and environmental impact.",
"This paper proposes a simple yet effective KGE framework which can reduce the training time and carbon footprint by orders of magnitudes compared with state-of-the-art approaches, while producing competitive performance.",
"We highlight three technical innovations: full batch learning via relational matrices, closed-form Orthogonal Procrustes Analysis for KGEs, and non-negative-sampling training.",
"In addition, as the first KGE method whose entity embeddings also store full relation information, our trained models encode rich semantics and are highly interpretable.",
"Comprehensive experiments and ablation studies involving 13 strong baselines and two standard datasets verify the effectiveness and efficiency of our algorithm.",
"The recent growth in energy requirements for Natural Language Processing (NLP) algorithms has led to the recognition of the importance of computationally cheap and eco-friendly approaches (Strubell et al., 2019).",
"The increase in computational requirements can, to a large extent, be attributed to the popularity of massive pre-trained models, such as Language Models (e.g., BERT (De-vlin et al., 2019) and GPT-3 (Brown et al., 2020)) and Knowledge Graph Embeddings (KGEs, e.g., SACN (Shang et al., 2019)), that require significant resources to train.",
"A number of solutions have been proposed such as reducing the number of parameters the model contains.",
"For instance, Sanh et al. (2019) introduced a distilled version of BERT and Chenghua Lin is the corresponding author.",
"Zhang et al. (2019) decreased the parameters used for training KGEs with the help of the quaternion.",
"In contrast with previous work, this paper explores algorithmic approaches to the development of efficient KGE techniques.",
"Knowledge Graphs are core to many NLP tasks and downstream applications, such as question answering (Saxena et al., 2020), dialogue agents (He et al., 2017), search engines (Dong et al., 2014) and recommendation systems (Guo et al., 2020).",
"Facts stored in a knowledge graph are always in the format of tuples consisting of one head entity, one tail entity (both are nodes in knowledge graphs) and a relation (an edge in knowledge graphs) between them.",
"KGEs learn representations of relations and entities in a knowledge graph, which are then utilised in downstream tasks like predicting missing relations (Bordes et al., 2013; Sun et al., 2019; Tang et al., 2020).",
"The application of deep learning has led to significant advances in KGE (Rossi et al., 2021).",
"Nonetheless, such approaches are computationally expensive with associated environmental costs.",
"For example, training the SACN model (Shang et al., 2019) can lead to emissions of more than 5.3kg CO 2 (for more data of other algorithms, see Tab. 2).",
"To alleviate the computational cost we introduce PROCRUSTES , a lightweight, fast, and eco-friendly KGE training technique.",
"PROCRUSTES is built upon three novel techniques.",
"First, to reduce the batch-wise computational overhead, we propose to parallelise batches by grouping tuples according to their relations, which ultimately enables efficient full batch learning.",
"Second, we turn to a closed-form solution for Orthogonal Procrustes Problem to boost the embedding training, which has never been explored in the context of KGEs.",
"Third, to break though the bandwidth bottleneck, our algorithm is allowed to be trained without negative samples.",
"Figure 1: The by-relation partitioning architecture of PROCRUSTES for a toy graph (left).",
"Matrices involved in the computation of Eq.",
"(1) are divided into two relational matrices: the upper is for relation 1 ( dashed ) and the lower is for relation 2 ( solid ).",
"lar datasets (WN18RR and FB15k-237) against 13 strong baselines.",
"Experimental results show that PROCRUSTES yields performance competitive with the state-of-the-art while also reducing training time by up to 98.4% and the carbon footprint by up to 99.3%.",
"In addition, we found that our algorithm can produce easily interpretable entity embeddings with richer semantics than previous approaches.",
"Our code is available at https://github.com/Pzoom522/ ProcrustEs-KGE .",
"Our contribution is three-fold: (1) We introduce three novel approaches to substantially reduce computational overhead of embedding large and complex knowledge graphs: full batch learning based on relational matrices, closed-form Orthogonal Procrustes Analysis for KGEs, and non-negative-sampling training.",
"(2) We systemically benchmark the proposed algorithm against 13 strong baselines on two standard datasets, demonstrating that it retains highly competitive performance with just order-of-minute training time and emissions of less than making two cups of coffee.",
"(3) We successfully encode both entity and relation information in a single vector space for the first time, thereby enriching the expressiveness of entity embeddings and producing new insights into interpretability.",
"We propose a highly efficient and lightweight method for training KGEs called PROCRUSTES , which is more efficient in terms of time consumption and CO 2 emissions than previous counterparts",
"by orders of magnitude while retaining strong performance.",
"This is achieved by introducing three novel optimisation strategies, namely, relational mini-batch, closed-form Orthogonal Procrustes Analysis, and non-negative sampling training.",
"Our proposed PROCRUSTES model is built upon segmented embeddings , a technique which has been leveraged by a number of promising recent approaches to KGE learning (e.g., RotatE (Sun et al., 2019), SEEK (Xu et al., 2020), and OTE (Tang et al., 2020)).",
"In contrast to conventional methods for KGEs where each entity only corresponds to one single vector, algorithms adopting segmented embeddings explicitly divide the entity representation space into multiple independent sub-spaces.",
"During training each entity is encoded as a concatenation of decoupled sub-vectors (i.e., different segments, and hence the name).",
"For example, as shown in Fig. 1, to encode a graph with 7 entities, the embedding of the t th entity is the row-wise concatenation of its d/d s sub-vectors (i.e., e t, 1 (cid:95) e t, 2 (cid:95) . . . (cid:95) e t,d/d s ), where d and d s denote the dimensions of entity vectors and sub-vectors, respectively.",
"Employing segmented embeddings permits parallel processing of the structurally separated sub-spaces, and hence significantly boosts the overall training speed.",
"Furthermore, segmented embeddings can also enhance the overall expressiveness of our model, while substantially reducing the dimension of matrix calculations.",
"We provide detailed discussion on the empirical influence of segmented embedding setups in 3.4.",
"Full batch learning via relational matrices.",
"Segmented embeddings can speed up training process by parallelising tuple-wise computation.",
"In this section, we propose a full batch learning technique via relational matrices, which can optimise batch-wise computation to further reduce training time.",
"This idea is motivated by the observation that existing neural KGE frameworks all perform training based on random batches constructed from tuples consisting of different types of relations (Bor-des et al., 2013; Trouillon et al., 2016; Schlichtkrull et al., 2018; Chami et al., 2020).",
"Such a training paradigm is based on random batches which, although straightforward to implement, is difficult to parallelise.",
"This is due to the nature of computer process scheduling: during the interval between a process reading and updating the relation embeddings, they are likely to be modified by other processes, leading to synchronisation errors and consequently result in unintended data corruption, degraded optimisation, or even convergence issues.",
"To tackle this challenge, we propose to construct batches by grouping tuples which contain the same relations .",
"The advantage of this novel strategy is two-fold.",
"For one thing, it naturally reduces the original tuple-level computation to simple matrix-level arithmetic.",
"For another and more importantly, we can then easily ensure that the embedding of each relation is only accessible by one single process .",
"Such a training strategy completely avoids the data corruption issue.",
"In addition, it makes the employment of the full batch learning technique (via relational matrices) possible, which offers a robust solution for parallelising the KGEs training process and hence can greatly enhance the training speed.",
"To the best of our knowledge, this approach has never been explored by the KGE community.",
"As illustrated in Fig. 1, we first separate the embedding space into segments (cf. 2.1) and arrange batches based on relations.",
"After that, for each training step, the workflow of PROCRUSTES is essentially decomposed into m d/d s parallel optimisation processes, where m is the number of relation types.",
"Let i and j denote the indices of relation types and sub-spaces, respectively, then the column-wise concatenations of the j th sub-vectors of all tuples of i th relations can be symbolised as H i,j (for head entities) and T i,j (for tail entities).",
"Similarly, R i,j denotes the corresponding relation embedding matrix in the j th sub-space.",
"The final objective function of PROCRUSTES becomes L = m (cid:88) i =1 d/d s (cid:88) j =1 || H i,j R i,j T i,j || 2 .",
"Orthogonal Procrustes Analysis.",
"Our key optimisation objective, as formulated in Eq.",
"(1), is to minimise the Euclidean distance between the head and tail matrices for each parallel process.",
"In addition, following Sun et al. (2019) and Tang et al. (2020), we restrict the relation embedding matrix R i,j to be orthogonal throughout model training, which has been shown effective in improving KGE quality.",
"Previous KGE models use different approaches to impose orthogonality.",
"For instance, RotatE (Sun et al., 2019) takes advantage of a corollary of Euler's identity and defines its relation embedding as R i,j = (cid:20) cos i,j sin i,j sin i,j cos i,j (cid:21) , (2) which is controlled by a learnable parameter i,j .",
"Although Eq.",
"(2) holds orthogonality and retains simplicity, it is essentially a special case of segmented embedding where d s equals 2.",
"As a result, R i,j is always two-dimensional, which greatly limits the modelling capacity (see 3.4 for discussion on the impact of dimensionality).",
"To overcome this limitation, OTE (Tang et al., 2020) explicitly orthogonalises R i,j using the Gram-Schmidt algorithm per back-propagation step (see Appendix A for details).",
"However, while this scheme works well for a wide range of d s (i.e., the dimension for the sub-vector), similar to RotatE, OTE finds a good model solution based on gradient descent, which is computationally very expensive.",
"We address the computational issue by proposing a highly efficient method utilising the proposed parallelism of full batch learning.",
"With full batch learning, comparing with existing methods which deal with heterogeneous relations, PROCRUSTES only needs to optimise one single R i,j in each process, which becomes a simple constrained matrix regression task.",
"More importantly, through Singular Value Decomposition (SVD), we can derive an closed-form solution (Schnemann, 1966) as R (cid:63)i,j = UV (cid:124) , w/ U V (cid:124) = SVD( H (cid:124) i,j T i,j ) , (3) where R (cid:63)i,j denotes the optima.",
"optimal embedding for each relation given the current entity embeddings by applying Eq.",
"(3).",
"Then, based on the calculated L , PROCRUSTES updates entity embeddings through the back propagation mechanism (NB: the relation embeddings do not require gradients here).",
"This process is repeated until convergence.",
"As the optimisation of relation embeddings can be done almost instantly per iteration thanks to the closed-form Eq.",
"(3), PROCRUSTES is significantly (orders of magnitude) faster than RotatE and OTE.",
"In addition, compared with entity embeddings of all other KGE models which are updated separately with relation embedding, entity embeddings trained by PROCRUSTES can be used to restore relation embeddings directly (via Eq.",
"(3)).",
"In other words, PROCRUSTES can encode richer information in the entity space than its counterparts (see 3.5).",
"Further optimisation schemes.",
"As recently surveyed by Ruffinelli et al. (2020), existing KGE methods employ negative sampling as a standard technique for reducing training time, where update is performed only on a subset of parameters by calculating loss based on the generated negative samples.",
"With our proposed closed-form solution (i.e., Eq.",
"(3)), computing gradients to update embeddings is no longer an efficiency bottleneck for PROCRUSTES .",
"Instead, the speed bottleneck turns out to be the extra bandwidth being occupied due to the added negative samples.",
"Therefore, for PROCRUSTES , we do not employ negative sampling but rather update all embeddings during each round of back propagation with positive samples only, in order to further optimise the training speed (see Appendix B for bandwidth comparisons against baselines which adopts negative sampling).",
"We also discovered that if we do not apply any additional conditions during training, PROCRUSTES tends to fall into a trivial optimum after several updates, i.e., L = 0 , with all values in H i,j , T i,j and R i,j being zero.",
"In other words, the model collapses with nothing encoded at all.",
"This is somewhat unsurprising as such trivial optima often yields large gradient and leads to this behaviour (Zhou et al., 2019).",
"To mitigate this degeneration issue, inspired by the geometric meaning of orthogonal R i,j (i.e., to rotate H i,j towards T i,j around the coordinate origin, without changing vector length), we propose to constrain all entities to a high-dimensional hypersphere by performing two spherisation steps in every epoch.",
"The first FB15k-237 WN18RR Entities 14,541 40,943 Relations 237 11 Train samples 272,115 86,835 Validate samples 17,535 3,034 Test samples 20,466 3,134 Table 1: Basic statistics of the two benchmark datasets.",
"technique, namely centring , respectively translates H i,j and T i,j so that the column-wise sum of each matrix becomes a zero vector (note that each row denotes a sub-vector of an entity).",
"The second operation is length normalisation , which ensures the row-wise Euclidean norm of H i,j and T i,j to always be one.",
"Employing these two simple constraints effectively alleviates the trivial optimum issue, as evidenced in our experiments (see 3).",
"We assess the performance of PROCRUSTES on the task of multi-relational link prediction, which is the de facto standard of KGE evaluation.",
"Datasets.",
"In this study, following previous works (e.g., baselines in Tab. 2), we employ two benchmark datasets for link prediction: (1) FB15K-237 (Toutanova and Chen, 2015), which consists of sub-graphs extracted from Freebase, and contains no inverse relations; and (2) WN18RR (Dettmers et al., 2018), which is extracted from WordNet.",
"Tab.",
"1 shows descriptive statistics for these two datasets, indicating that FB15K-237 is larger in size and has more types relations while WN18RR has more entities.",
"We use the same training, validating, and testing splits as past studies.",
"Evaluation metrics.",
"Consistent with Sun et al. (2019) and Tang et al. (2020), we report Hit Ratio with cut-off values n = 1 , 3 , 10 (i.e., H1, H3, and H10) and Mean Reciprocal Rank (MRR).",
"Additionally, as to efficiency, we report the time cost and CO 2 emissions for each model, i.e., from the beginning of training until convergence.",
"Baselines.",
"We compare PROCRUSTES to not only classical neural graph embedding methods, including TransE (Bordes et al., 2013), Dist-Multi (Yang et al., 2015), and ComplEx (Trouil-lon et al., 2016), but also embedding techniques recently reporting state-of-the-art performance on either WN18RR or FB15k-237, including R-GCN (Schlichtkrull et al., 2018), ConvE (Dettmers WN18RR FB15k-237 MRR H1 H3 H10 MRR H1 H3 H10 TransE (2013) .226 -.501 85 367 .294 -.465 96 370 DistMult (2015) .430 .390 .440 .490 79 309 .241 .155 .263 .419 91 350 ComplEx (2016) .440 .410 .460 .510 130 493 .247 .158 .275 .428 121 534 R-GCN (2018) .417 .387 .442 .476 138 572 .248 .151 .264 .417 152 598 ConvE (2018) .430 .400 .440 .520 840 3702 .325 .237 .356 .501 1007 4053 A2N (2019) .450 .420 .460 .510 203 758 .317 .232 .348 .486 229 751 SACN (2019) .470 .430 .480 .540 1539 5342 .352 .261 .385 .536 1128 4589 TuckER (2019) .470 .443 .482 .526 173 686 .358 .266 .392 .544 184 704 QuatE (2019) .488 .438 .508 .582 176 880 .348 .248 .382 .550 180 945 InteractE (2020) .463 .430 -.528 254 1152 .354 .263 -.535 267 1173 RotH (2020) .496 .449 .514 .586 192 903 .344 .246 .380 .535 207 1120 RotatE (2019) .439 .390 .456 .527 255 823 .297 .205 .328 .480 343 1006 OTE (2020) .448 .402 .465 .531 304 1008 .309 .213 .337 .483 320 1144 PROCRUSTES (ours) .453 .408 .491 .549 14 37 .295 .241 .310 .433 9 42 w/ NS (ours) .457 .411 .494 .551 44 124 .302 .245 .333 .465 37 159 w/ TB (ours) .468 .417 .498 .557 92 268 .326 .247 .354 .492 56 243 w/ NS+TB (ours) .474 .421 .502 .569 131 346 .345 .249 .379 .541 85 285 Table 2: Model effectiveness and efficiency on link prediction benchmarks.",
"et al., 2018), A2N (Bansal et al., 2019), RotatE (Sun et al., 2019), SACN (Shang et al., 2019), TuckER (Balazevic et al., 2019), QuatE (Zhang et al., 2019), InteractE (Vashishth et al., 2020), OTE (Tang et al., 2020), and RotH (Chami et al., 2020).",
"For all these baselines, we use the official code and published hyper-parameters to facilitate reproducibility.",
"Implementation details.",
"All experiments are conducted on a workstation with one NVIDIA GTX 1080 Ti GPU and one Intel Core i9-9900K CPU, which is widely applicable to moderate in-dustrial/academic environments.",
"We use the Experiment Impact Tracker (Henderson et al., 2020) to benchmark the time and carbon footprint of training.",
"To reduce measurement error, in each setup we fix the random seeds, run PROCRUSTES and all baselines for three times and reported the average.",
"The key hyper-parameters of our model is d and d s , which are respectively set at 2K and 20 for both datasets.",
"The detailed selection process is described in 3.4.",
"We train each model for a maximum of 2K epochs and check if the validation MRR stops increasing every 100 epochs after 100 epochs.",
"For WN18RR and FB15k-237 respectively, we report the best hyperparameters as fixed learning rates of 0.001 and 0.05 (Adam optimiser), and stopping epochs of 1K and 200.",
"Tab.",
"2 reports the results of both our PROCRUSTES and all other 13 baselines on both WN18RR and FB15k-237 datasets.",
"We analyse these results from two dimensions: (1) Effectiveness : the model performance on link prediction task (MRR is our main indicator); (2) Efficiency : system training time and carbon footprint (i.e., CO 2 emissions).",
"Regarding the performance on WN18RR, we found that PROCRUSTES performs as good as or even better than previous state-of-the-art approaches.",
"To be concrete, out of all 13 baselines, it beats 11 in H10, (at least) 9 in H3 and 8 in MRR.",
"The models outperformed by PROCRUSTES include not only all methods prior to 2019, but also several approaches published in 2019 or even 2020.",
"Notably, when compared with the RotatE and OTE, two highly competitive methods which have similar architectures to PROCRUSTES (i.e., with segmented embeddings and orthogonal con-straints), our PROCRUSTES can learn KGEs with higher quality (i.e., 0.014 and 0.005 higher in MRR, respectively).",
"This evidences the effectiveness of the proposed approaches in 2 in modelling knowledge tuples.",
"While PROCRUSTES achieves very competitive performance, it requires significantly less time for training: it converges in merely 14 minutes , more than 100 times faster than strong-performing counterparts such as SACN.",
"Moreover, it is very envi-0 .",
"ronmentally friendly: from bootstrapping to convergence, PROCRUSTES only emits 37g of CO 2 , which is even less than making two cups of cof-fee 1 .",
"On the contrary, the baselines emit on average 1469g and up to 5342g CO 2 : the latter is even roughly equal to the carbon footprint of a coach ride from Los Angeles to San Diego 2 .",
"As for the testing results on FB15k-237, we found that although PROCRUSTES seems less outstanding (we investigate the reasons in 3.3), it still outperforms at least 7 more complex baselines in H1 and almost all models prior to 2019 in MRR.",
"Furthermore, similar to the observation on WN18RR, it demonstrates great advantage in terms of efficiency.",
"While all baselines need 91 to 1128 minutes to coverage with 350g to 4589g CO 2 produced, PROCRUSTES can learn embeddings of similar quality in just 9 minutes and with 42g emissions .",
"By employing both traditional batch and negative sampling, we show that PROCRUSTES can achieve near-state-of-the-art performance on both datasets.",
"We discuss this in detail in 3.3.",
"To provide a unified comparisons between PROCRUSTES and the most strong-performing baselines on both effectiveness and efficiency, we further investigate the following question: How much performance gain can we obtain by spending unit time on training or making unit emissions?",
"We did analysis by calculating MRR/(training time) and MRR/(carbon footprint) and the results are presented in Fig. 2.",
"It is obvious that among all competitive KGE models, PROCRUSTES is the most economic algorithm in terms of performance-cost trade-off: it is more than 20 times more efficient than any past works, in terms of both performance per unit training time and per unit CO 2 emissions.",
"We also investigate baseline performance with a 1 https://tinyurl.com/coffee-co2 2 https://tinyurl.com/GHG-report-2019 shorter training schedule.",
"From scratch, we train RotH, the best performing algorithm on WN18RR, and stop the experiment when MRR reaches the performance of PROCRUSTES .",
"On WN18RR, RotH takes 50 minutes (3.6 PROCRUSTES ) and emits 211g CO 2 (5.7 PROCRUSTES ); on FB15k-237 RotH takes 45 minutes (5.0 PROCRUSTES ) and emits 218g CO 2 (5.2 PROCRUSTES ).",
"These results once again highlight the efficiency superiority of our approach.",
"To better understand the performance difference of PROCRUSTES on WN18RR and FB15k-237, we dive deeply into the dataset statistics in Tab.",
"1.",
"Goyal et al. (2017) and Hoffer et al. (2017) found that although full batch learning can boost training speed and may benefit performance, when the data distribution is too sparse, it may be trapped into sharp minimum.",
"As the average number of samples linked to each relation is significantly smaller for FB15k-237 than for WN18RR (1148 vs 7894), the distribution of the former is likely to be more sparse and the generalisability of PROCRUSTES may thus be harmed.",
"For another, FB15k-237 has finer-grained relation types (237 vs . 11 of WN18RR), so intuitively the likelihood of tuples sharing similar relations rises.",
"However, as PROCRUSTES omits negative sampling to trade for speed, sometimes it maybe be less discriminative for look-alike tuples.",
"To validate the above hypotheses, we additionally conduct ablation studies by switching back to traditional batch mode and/or adding negative sampling modules 3 .",
"Configurations where the closed-form optimisation, Eq.",
"(3), is replaced by gradient descent are omitted since the resulting architecture is very similar to OTE.",
"As shown in the lower 3 Following Sun et al. (2019), we set the batch size at 1024 and the negative sample size at 128.",
"section of Tab.",
"2, both using either traditional or negative sampling (i.e., w/ NS and w/ TB) can improve the performance of PROCRUSTES for all metrics.",
"For example, on WN18RR our approach (w/ NS+TB) outperforms most baselines and is close to the performance of QuatE and RotH, but thanks to the Orthogonal Procrustes Analysis, the computational cost of our approach is significantly less.",
"Compared to WN18RR, the gain of our model on FB15k-237 by adopting negative sampling and traditional batch is even more significant, achieving near-state-of-the-art performance (i.e., compared to TuckER, the MRR is only 1.3% less with merely 4.9% of the computational time).",
"These observations verify our aforementioned hypotheses.",
"We also found out that traditional batch is more effective than negative sampling for PROCRUSTES in terms of improving model performance.",
"On the other hand, however, adding these two techniques can reduce the original efficiency of PROCRUSTES to some extend.",
"Nevertheless, as Eq.",
"(3) is not only fast but also energy-saving (as only basic matrix arithmetic on GPUs is involved), even PROCRUSTES with the w/ NS+TB configuration preserves great advantage in training time and carbon footprint.",
"Moreover, it achieves near-state-of-the-art effectiveness on both datasets (cf.",
"Tab.",
"2) and still exceeds strong baselines in training efficiency with large margins (cf. Fig. 2).",
"One interesting observation is that, while the training time of RotH is merely 1.47 of that of PROCRUSTES (w/ NS+TB), their emission levels are drastically different.",
"This is because RotH implements 24-thread multiprocessing by default while our approach creates only one process.",
"Within similar training time, methods like RotH will thus consume a lot more power and emit a lot more CO 2 .",
"Therefore, for effectiveness-intensive applications, we recommend training PROCRUSTES in transitional batches with negative sampling, as it can then yield cutting-edge performance without losing its eco-friendly fashion.",
"Our experiments also indicate that the selection of two dimensional hyper-parameters has substantial influence on both effectiveness and efficiency of PROCRUSTES .",
"For the dimension of the entire embedding space, we follow the recommendation of Tang et al. (2020) and set d s at 20.",
"We then train PROCRUSTES with d 250 500 750 1000 1250 1500 1750 2000 0 .",
"{ 100 , 200 , 400 , 800 , 1K , 1 .",
"5K , 2K } and plotted results based on the validation set, as shown in Fig. 3.",
"It is evident that with the increase of d , the model performance (indicated by MRR) grows but the training time also rises.",
"Observing the curvature of training time almost saturates when d (cid:62) 1K, we decide 2K as the best setting for both WN18RR and FB15k-237 given the 11GB graphics mem-ory limit of our hardware.",
"For the dimension of sub-embeddings, we fix d at 2K and enumerated d s { 2 , 5 , 10 , 20 , 25 , 50 , 100 } .",
"For algorithm performance, the pattern we witnessed is on par with that reported by Tang et al. (2020), i.e., before d s reaches 20 or 25 the effectiveness jumps rapidly, but after that the model slowly degrades, as the learning capacity of the network reduces.",
"Coincidentally, the training speed also climbs its peak when d s is 20, making it indisputably become our optimal choice.",
"Building on the fact that PROCRUSTES marry entity information and relation information (in other words, for a specific entity, the information of the entity itself and of its corresponding relations is encoded in a single vector), the location of a entity is more expressive and, thus, the related entity embedding is more interpretable.",
"Picking up on that, we do visualisation study on the trained entity embeddings.",
"To this end, we conduct dimension reduction on the embeddings using Principal Components Analysis (PCA), which reduces the dimensionality A1 A2 A3 B1 B2 C1 C2 D E F A1 chittagong, cartagena, pittsburgh_of_the_south, le_havre, nanning, stuttgart, kolkata, houston, windy_city, . . . A2 yellowstone_river, atlas_mountains, san_fernando_valley, sambre_river, nile_river, susquehanna_river, rhine_river, . . . A3 sudan, balkanshe_alps, east_malaysia, lower_egypt, kali-mantan, turkistan, tobago, lowlands_of_scotland, sicily, . . . B1 mefoxin, metharbita, valium, amobarbital, procaine, nitro-stat, tenormin, minor_tranquillizer, cancer_drug, . . . B2 epinephrine, steroid_hormone, internal_secretion, alkaloid, gallamine, prolactin, luteinizing_hormone, . . . C1 military_formation, retreat, tactics, strategic_warning, peacekeeping_operation, unauthorized_absence, . . . C2 commando, sailor_boy, outpost, saddam's_martyrs, military_advisor, battlewagon, commander, . . . D plaintiff, remitment, franchise, summons, false_pretens, suspect, amnesty, legal_principle, disclaimer, affidavit, . . . E genus_ambrosia, gloxinia, saintpaulia, genus_cestrum, genus_eriophyllum, valerianella, genus_chrysopsis, . . . F moneyer, teacher, researcher, president, prime_minister, wheeler_dealer, house_servant, victualler, burglar, . . . Figure 4: 3D PCA visualisation of PROCRUSTES entity embeddings for WN18RR.",
"of an entity embedding from 2K to three 4 .",
"Fig. 3 shows the visualisation result, from which we see a diagram with 6 arms.",
"This is far distinct from the distributional topology of conventional semantic representations, e.g., word embeddings (Mikolov et al., 2013) (see Appendix C).",
"In Fig. 3, we also list the representative entities that fall in some clusters on each arm.",
"Each cluster is referred by an ID (from A1 to F2).",
"When we zoom into this list, we observe something interesting: First , entities on the same arm are semantically similar, or, in other words, these entities belong to the same category.",
"Concretely, entities on arm A are locations, those on arm B are biochemical terms, and those on arm C are military related entities.",
"Entities on arm D, E, and F consists of entities refer to concepts of law, botany, and occupation, respectively.",
"Second , significant differences exist between each cluster/position on a arm.",
"One example is that, for arm A, A1 are entities for cities, such as Stuttgart , Houston , Nanning ; A2 is about entities for rivers, mountains,",
"etc.; and A3 contains entities referring to countries or regions.",
"Similarly, while B1 mainly consists of medicine names, entities in B2 obviously relate to chemical terms.",
"Last , PROCRUSTES can also put the nick name of a entity into the correct corresponding cluster.",
"For example, Windy City (i.e., Chicago) and Pittsburgh of the South (i.e, Birmingham) were successfully recognised as names for cities.",
"KGE techniques.",
"In recent years, a growing body of studies has been conducted on the matter of training KGEs.",
"Roughly speaking, these KGE methods fall into two categories: distance-based models and semantic matching models.",
"The line of researches regarding distance-based models, which measures plausibility of tuples by calculating distance between entities with additive functions, was initialised the KGE technique proposed by Bordes et al. (2013), namely, TransE.",
"After that, a battery of follow-ups have been proposed, including example models like TransH (Wang et al., 2014), TransR (Lin et al., 2015), and TransD (Ji et al., 2015).",
"These algorithms have enhanced ability on modelling complex relations by means of projecting entities into different (more complex) spaces or hyper-planes.",
"More recently, a number of studies attempt to further boost the quality of KGEs through a way of adding orthogonality constraints (Sun et al., 2019; Tang et al., 2020) for maintaining the relation embedding matrix being orthogonal, which is also the paradigm we follow in the present paper (see 2).",
"In contrast, semantic matching models measure the plausibility of tuples by computing the similarities between entities with multiplicative functions.",
"Such an similarity function could be realised using, for example, a bilinear function or a neural network.",
"Typical models in this line includes DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016), ConvE (Dettmers et al., 2018), TuckER (Bal-azevic et al., 2019), and QuatE (Zhang et al., 2019).",
"Accelerating KGE training.",
"All those KGE approaches share the same issue of their low speed in both training and inference phases (see Rossi et al. (2021) for a controlled comparison of the efficiency across different methodologies).",
"In response to this issue, some state-of-the-art KGE algorithms attempted to accelerate their inference speed either through making use of the high-speed of the convolutional neural networks (Dettmers et al., 2018) or through reducing the scale of parameters of the model (Zhang et al., 2019; Zhu et al., 2020).",
"As for the acceleration of model training, a number of attempts have been conducted in a mostly engineering way.",
"These well-engineered systems adopt linear KGE methods to multi-thread versions in other to make full use of the hardware capacity (Joulin et al., 2017; Han et al., 2018), which accelerates training time of, for example, TransE, from more than an hour to only a couple of minutes.",
"Nonetheless, this line of work has two major issues: one is that training models faster in this way does not necessarily mean they also emit less, as process scheduling of a multi-thread system can be energy-consuming.",
"The other is that they are all extensions of linear KGE models only (also noting that linear models are naturally much faster than other non-linear models) without any algorithmic contribution, which leading to the performance of the resulting models limited by the upper bound of linear models (e.g., recent state-of-the-art methods in Tab. 2, such as RotH, are nonlinear approaches).",
"In this paper, we proposed a novel KGE training framework, namely PROCRUSTES , which is eco-friendly, time-efficient and can yield very competitive or even near-state-of-the-art performance.",
"Extensive experiments show that our method is valuable especially considering its significant and substantial reduction on training time and carbon footprint.",
"We provided a efficient KGE training framework in this paper.",
"The resulting KGEs, akin to all previous KGE models, might have been encoded with social biases, e.g., the gender bias (Fisher, 2020).",
"We suggest this problem should always be looked at critically.",
"For whoever tend to build their applications grounding on our KGEs, taking care of any consequences caused by the gender bias is vital since, in light of the discussion in Larson (2017), mis-gendering individuals/entities is harmful to users (Keyes, 2018).",
"Additionally, as having been proven in this paper, our method emits less greenhouse gases and therefore, has less negative environmental repercussions than any other KGE approaches.",
"This work is supported by the award made by the UK Engineering and Physical Sciences Research Council (Grant number: EP/P011829/1) and Baidu, Inc.",
"We would also like to express our sincerest gratitude to Chen Li, Ruizhe Li, Xiao Li, Shun Wang, and the anonymous reviewers for their insightful and helpful comments."
] | [
"abstain",
"abstain",
"objective",
"result",
"objective",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other"
] |
[
"Recurrent neural networks have achieved state-of-the-art results in many artificial intelligence tasks, such as language modeling, neural machine translation, speech recognition and so on.",
"One of the key factors to these successes is big models.",
"However, training such big models usually takes days or even weeks of time even if using tens of GPU cards.",
"In this paper, we propose an efficient architecture to improve the efficiency of such RNN model training, which adopts the group strategy for recurrent layers, while exploiting the representation rearrangement strategy between layers as well as time steps.",
"To demonstrate the advantages of our models, we conduct experiments on several datasets and tasks.",
"The results show that our architecture achieves comparable or better accuracy comparing with baselines, with a much smaller number of parameters and at a much lower computational cost.",
"Recurrent Neural Networks (RNNs) have been widely used for sequence learning, and achieved state-of-the-art results in many artificial intelligence tasks in recent years, including language modeling (Zaremba et al., 2014; Merity et al., 2017), neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2014), and speech recognition (Graves et al., 2013).",
"To get better accuracy, recent state-of-the-art RNN models are designed toward big scale, include going deep (stacking multiple recurrent layers) (Pascanu et al., 2013a) and/or going wide (in-creasing dimensions of hidden states).",
"For example, an RNN based commercial Neural Machine Translation (NMT) system would employ tens of layers in total, resulting in a large model with hundreds of millions of parameters (Wu et al., 2016).",
"However, when the model size increases, the computational cost, as well as the memory needed for the training, increases dramatically.",
"The training cost of aforementioned NMT model reaches as high as 10 19 FLOPs, and the training procedure spends several days with even 96 GPU cards (Wu et al., 2016) such complexity is prohibitively expensive.",
"While above models benefit from big neural networks, it is observed that such networks often have redundancy of parameters (Kim and Rush, 2016), motivating us to improve parameter efficiency and design more compact architectures that are more efficient in training while keeping good performance.",
"Recently, many efficient architectures for convolution neural networks (CNNs) have been proposed to reduce training cost in computer vision domain.",
"Among them, the group convolution is one of the most widely used and successful attempts (Szegedy et al., 2015; Chollet, 2016; Zhang et al., 2017b), which splits the channels into groups and conducts convolution separately for each group.",
"It's essentially a diagonal sparse operation to the convolutional layer, which reduces the number of parameters as well as the computation complexity linearly w.r.t. the group size.",
"Empirical results for such group convolution optimization show great speed up with small degradation on accuracy.",
"In contrast, there are very limited attempts for designing better architectures for RNNs.",
"Inspired by those works on CNNs, in this paper, we generalize the group idea to RNNs to conduct recurrent learning in the group level.",
"Different from CNNs, there are two kinds of parameter redundancy in RNNs: (1) the weight matrices transforming a low-level feature representation to a high-level one may contain redundancy, and (2) the recurrent weight matrices transferring the hidden state of the current step to the hidden state of the next step may also contain redundancy.",
"Therefore, when designing efficient RNNs, we need to consider both the kinds of redundancy.",
"We present a simple architecture for efficient sequence learning which consists of group recurrent layers and representation rearrangement layers.",
"First, in a recurrent layer, we split both the input of the sequence and the hidden states into disjoint groups, and do recurrent learning separately for each group.",
"This operation clearly reduces the model complexity, and can learn intragroup features efficiently.",
"However, it fails to capture dependency cross different groups.",
"To recover the inter-groups correlation, we further introduce a representation rearrangement layer between any two consecutive recurrent layers, as well as any two time steps.",
"With these two operations, we explicitly factorize a recurrent temporal learning into intra-group temporal learning and inter-group temporal learning with a much smaller number of parameters.",
"The group recurrent layer we proposed is equivalent to the standard recurrent layer with a block-diagonal sparse weight matrix.",
"That is, our model employs a uniform sparse structure which can be computed very efficiently.",
"To show the advantages of our model, we analyze the computation cost and memory usage comparing with standard recurrent networks.",
"The efficiency improvement is linear to the number of groups.",
"We conduct experiments on language modeling, neural machine translation and abstractive summarization by using a state-of-the-art RNN architecture as baseline.",
"The results show that our model can achieve comparable or better accuracy, with a much smaller number of parameters and in a shorter training time.",
"The remainder of this paper is organized as follows.",
"We first present our newly proposed architecture and conduct in depth analysis on its efficiency improvement.",
"Then we show a series of empirical study to verify the effectiveness of our methods.",
"Finally, to better position our work, we introduce some related work and then conclude our work.",
"In this section, we introduce our proposed architecture for RNNs.",
"Before getting into the details of the group recurrent layer and representation rearrangement layer in our architecture, we first revisit the vanilla RNNs.",
"An RNN is a neural network with recurrent layers that capture temporal dynamics of a sequence with arbitrary length.",
"It recursively applies a transition function to its internal hidden state for each symbol of input sequence.",
"The hidden state at time step t is computed as a function f of the current input symbol x t and the previous hidden state h t 1 in a recurrent form: h t = f ( x t , h t 1 ) .",
"where W is the input-to-hidden weight matrix, U is the state-to-state recurrent weight matrix, and tanh is the hyperbolic tangent function.",
"Our work is independent to the choices of the recurrent function ( f in Equation 1).",
"For simplicity, in the following, we take the vanilla RNN as an example to introduce and analyze our new architecture.",
"We aim to design an efficient RNN architecture by reducing the parameter redundancy while keeping accuracy at the same time.",
"Inspired by the success of group convolution in CNN, our architecture employs the group strategy to achieve a sparsely connected structure between neurons of recurrent layers, and employs the representation rearrangement to recover the correlation that may destroyed by the sparsity.",
"At a high level, we explicitly factorize the recurrent learning as intergroup recurrent learning and intra-group recurrent learning.",
"In the following, we will describe our RNN architecture in detail, which consists of a group recurrent layer for intra-group correlation and a representation rearrangement layer for intergroup correlation.",
"For standard recurrent layer, the model complexity increases quadratically with the dimension of hidden state.",
"Suppose the input x is with dimension M , while the hidden state is with dimension N .",
"Then, for standard vanilla RNN cell, according to Equation 2, the number of parameters, as well as the computation cost is N 2 + N M. (3) It's obvious that the hidden state dimension largely determines the model complexity.",
"Optimization on reducing computation w.r.t the hidden state is the key to improve the overall efficiency.",
"Accordingly, we present a group recurrent 800",
"layer which adopts a group strategy to approximate the standard recurrent layer.",
"Specifically, we consider to split both the input x t and hidden state h t into K disjoint groups as { x 1 t , x 2 t , ..., x K t } and { h 1 t , h 2 t , ..., h Kt } respectively, where x it , h it represent the input and hidden state for i -th group at time step t .",
"Based on this split, we then perform recurrent computation in every group independently.",
"This will captures the intra-group temporal correlation within the sequence.",
"Formally, in the group recurrent layer, we first compute the hidden state of each group h it as h it = f i ( x it , h it 1 ) , i = 1 , 2 , ..., K. (4) Then, concatenating all the hidden states from each group together, h t = concat ( h 1 t , h 2 t , ..., h Kt ) (5) we get the output of the group recurrent layer.",
"The group recurrent layer is illustrated as Figure",
"2(a) and Figure",
"1(b).",
"Obviously, by splitting the features and hidden states into K groups, the number of parameters and the computation cost of recurrent layer reduce to K (( N K ) 2 + N K M K ) = N 2 + N M K (6) Comparing Equation 3 with Equation 6, the group recurrent is K times more efficient than the standard recurrent layer, in terms of both computational cost and number of parameters.",
"Although the theoretical computational cost is attractive, the speedup ratio also depends on the",
"implementation details.",
"A naive implementation of Equation 4 would introduce a for loop, which is not efficient since the additional overhead and poor parallelism.",
"In order to really achieve linear speed up, we employ a batch matrix multiplication to assemble the computation of different groups in a single round of matrix multiplication.",
"This operation is critical especially when each group isn't big enough to fully utilize the entire GPU computation power.",
"Group recurrent layer is K times more efficient comparing with the standard recurrent layer.",
"But, it only captures the temporal correlation inside a single feature group and fails to learn dependency across features from different groups.",
"more specifically, the internal state of RNN only contains history from corresponding group (Figure 801",
"1(b)).",
"Similar problem also exists in the vertical direction of group recurrent layers (Figure",
"2(a)).",
"Consider a network with multiple stacked group recurrent layers, the output of the specific group are only get from the corresponding input group.",
"Obviously, there will be a significant drop of representation power since many feature correlations are cut off by this architecture.",
"To recover the inter-group correlations, one simple way is adding a projection layer to transform the hidden state outputted by the group recurrent layer, like the 1 1 convolution used in depthwise separable convolutional (Chollet, 2016).",
"However, such method would bring additional N 2 computation complexity and model parameters.",
"Inspired by the idea of permuting channels between convolutional layers in recent CNN architectures (Zhang et al., 2017a,b), we propose to add representation rearrangement layer between consecutive group recurrent layers (Figure",
"1(c)), as well as the time steps within a group recurrent layer (Figure",
"2(b)).",
"The representation rearrangement aims to rearrange the hidden representation, to make sure the subsequent layers, or time steps, can see features from all input groups.",
"The representation rearrangement layer is parameter-free and simple.",
"We leverage the same implementation in (Zhang et al., 2017b) to conduct the rearrangement.",
"It's finished with basic tensor operations reshape , and transpose , which brings (almost) no runtime overhead in our experiments.",
"Consider the immediate representation h t RN outputted by group recurrent layer with group number K .",
"First, we reshape the representation to add an additional group dimension, resulting in a tensor with new shape ( K, N/K ) .",
"Second, we transpose the two dimensions of the temporary tensor, changing the tensor shape to ( N/K, K ) .",
"Finally, we reshape the tensor along the first axis to restore the representation to its original shape (a vector of size N ).",
"Figure 3 illustrates the operations with a simple example whose representation is with size 8 and group number is",
"2. Combining the group recurrent layer and representation rearrangement layer, we rebuild the recurrent layer into an efficient and effective layer.",
"We note that, different from convolutional neural networks that are only deep in space, the stack RNNs are deep in both space and time.",
"Figure 1 illustrates our architecture along the spatial direction, and Figure 2 illustrates our architecture along the temporal direction.",
"By applying group operation and representation rearrangement in both space and time, we build a new recurrent neural network with high efficiency.",
"In this section, we analyze the relation between group recurrent layer and standard recurrent layer, and discuss the advantages of group recurrent networks.",
"From the reformulation, we can see group recurrent layer is equivalent to standard recurrent layer with block-diagonal sparse weight matrix.",
"Our method employs a group level sparsity in recurrent computation, leading to a uniform sparse structure.",
"This uniform sparse structure can enjoy the efficient computing of dense matrix, as we discussed in Section 2.1.",
"This reformulation also shows that there is no connection across neurons in different groups.",
"Increasing the group number will lead to higher sparse rate.",
"This sparse structure may limit the representation ability of our model.",
"In order to recover the correlation across different groups, we add representation rearrangement to make up for representation ability.",
"We have shown that with same width of recurrent layer, our architecture with group number K achieves a compact model, which has K times less number of parameters than the standard recurrent network.",
"Therefore with same number of parameters, group recurrent networks can provide more possibility to try more complex model without any 802 Figure 3: Illustration of the implementation of representation rearrangement with basic tensor operation.",
"additional computation and parameter overhead.",
"Given a standard recurrent neural network, we can construct a corresponding group recurrent neural network with same number of parameters, but with K times wider, or with K times deeper.",
"A factor smaller than K would make our networks still effective than standard recurrent network, but with wider and/or deeper recurrent layers.",
"This could somehow compensate the potential performance drop due to the aggressive sparsity when group number is too large.",
"Therefore, our architecture provides large model space to find a better tradeoff between parameter and performance given a fixed resource budget.",
"And our model is a more effective RNN architecture when the network goes deeper and wider.",
"At last, we note that our architecture focuses on improving the efficiency of recurrent layers.",
"Thus the whole parameter and computational cost reduction depend on the ratio of recurrent layer in the entire network.",
"Consider a text classification task, a often used RNN model would introduce an embedding layer for the input tokens and a softmax layer for the output, making the parameter reduction and speedup for the whole network is not strictly linear with the group number.",
"However, we argue that for deeper and/or wider RNN whose recurrent layers dominate the parameter and computational cost, our method would enjoy more efficiency improvement.",
"In this section, we present results on three sequence learning tasks to show the effectiveness of our method: 1).",
"language modeling; 2).",
"neural machine translation; 3).",
"abstractive summarization.",
"For evaluating the effectiveness of our approach, we perform language modeling over Penn Tree-bak (PTB) dataset (Marcus et al., 1993).",
"We use the data preprocessed by (Mikolov et al., 2010) 1 , which consists of 929 K training words, 73 K validation words, and 82 K test words.",
"It has 10 K words in its vocabulary.",
"We compare our method (named Group LSTM) with the standard LSTM baseline (Zaremba et al., 2014) and its two variants with Bayesian dropout (named LSTM + BD) (Gal and Ghahramani, 2016) and with word tying (named LSTM + WT) (Press and Wolf, 2017).",
"Following the big model settings in (Zaremba et al., 2014; Gal and Ghahramani, 2016; Inan et al., 2016) , all experiments use a two-layer LSTM with 1 , 500 hidden units and an embedding of size 1 , 500 .",
"We set group number 2 in this experiment since PTB is a relative simple dataset.",
"We use Stochastic Gradient Descent (SGD) to train all models.",
"Results We compare the word level perplexity obtained by the standard LSTM baseline models and our group variants, in which we replace the standard LSTM layer with our group LSTM layer.",
"As shown in Table 1, Group LSTM achieves comparable performance with the standard LSTM baseline, but with a 27 % parameter reduction.",
"A variant using Bayesian dropout (BD) is proposed by (Gal and Ghahramani, 2016) to prevent over-fitting and improve performance.",
"We test our model with LSTM + BD, achieving similar results with above comparison.",
"Finally, we compare our model with the recently proposed word tying (WT) technology, which ties input embedding and output embedding with same weights.",
"Our model achieves even better perplexity than the results reported by (Press and Wolf, 2017).",
"Since word tying reduces the number of parameters of embedding and softmax layers, thus improving the ratio of LSTM layer parameter.",
"Our method achieves a 35 % parameter reduction.",
"We then study our model in neural machine translation.",
"We conduct experiments on two translation tasks, German-English task (De-En for short) and English-German task (En-De for short).",
"For De-En translation, we use data from the De-En ma-1 http://www.fit.vutbr.cz/imikolov/ rnnlm/simple-examples.tgz 803 Model Parameters Validation Set Test Set LSTM (Zaremba et al., 2014) 66M 82.2 78.4 2 Group LSTM 48M 82.0 78.6 LSTM + BD (Gal and Ghahramani, 2016) 66M 77.9 75.2 2 Group LSTM + BD 48M 79.9 75.8 LSTM + WT (Press and Wolf, 2017) 51M 77.4 74.3 2 Group LSTM + WT 33M 76.8 73.3 LSTM + BD + WT (Press and Wolf, 2017) 51M 75.8 73.2 2 Group LSTM + BD + WT 33M 75.6 71.8 Table 1: Single model complexity on validation and test sets for the Penn Treebank language modeling task.",
"chine translation track of the IWSLT 2014 evaluation campaign (Cettolo et al., 2014).",
"We follow the pre-processing described in previous works (Wu et al., 2017).",
"The training data comprises about 153 K sentence pairs.",
"The size of validation data set is 6 , 969 , and the test set is 6 , 750 .",
"For En-De translation, we use a widely adopted dataset (Jean et al., 2015; Wu et al., 2016).",
"Specifically, part of data in WMT'14 is used as the training data, which consists of 4.5M sentences pairs.",
"newstest2012 and newstest2013 are concatenated as the validation set and newstest2014 acts as test set.",
"These two datasets are preprocessed by byte pair encoding (BPE) with vocabulary of 25 K and 30 K for De-En and En-De respectively, and the max length of sub-word sentence is 64 .",
"Our model is based on RNNSearch model (Bah-danau et al., 2014), but replacing the standard LSTM layer with our group LSTM layer.",
"Therefore, we name our model as Group RNNSearch model.",
"The model is constructed by LSTM encoder and decoder with attention, where the first layer of encoder is bidirectional LSTM.",
"For De-En, we use two layers for both encoder and decoder.",
"The embedding size is 256 , which is same as the hidden size for all LSTM layers.",
"As for EnDe, we use four layers for encoder and decoder 2 .",
"The embedding size is 512 and the hidden size is 1024 3 .",
"All the models are trained by Adadelta (Zeiler, 2012) with initial learning rate 1 .",
"0 .",
"The gradient is clipped with threshold 2 .",
"5 .",
"The mini-batch size is 32 for De-En and 128 for En-De.",
"We use dropout (Srivastava et al., 2014) with rate 0 .",
"1 for all layers except the layer before softmax with 0 .",
"5 .",
"We halve the learning rate according to the validation performance.",
"2 For easy to implement, we still keep the first layer with attention computation in the decoder as original LSTM layer.",
"3 In our implementation, suppose the hidden size is d , after the first bi-directional LSTM layer in the encoder, the hidden size of the above LSTM layers in the encoder should be 2 d .",
"Results We compute tokenized case-sensitive BLEU (Papineni et al., 2002) 4 score as evaluation metric.",
"For decoding, we use beam search (Sutskever et al., 2014) with beam size 5 .",
"From Table 2, we can observe that on De-En task, Group RNNSearch models achieve comparable or better BLEU score compared with the RNNSearch but with much less number of parameters.",
"Specifically, with group number 2 and 4, we achieve about 28% and 43% parameter reduction of recurrent layers respectively.",
"Note that our results also outperform the state-of-the-art result reported in NPMT (Huang et al., 2017).",
"The En-De translation results are shown in Table",
"3. We compare our Group RNNSearch models with Google's GNMT system (Wu et al., 2016) and DeepLAU (Wang et al., 2017).",
"Our 4 4 https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/generic/multi-bleu.perl 804 Group RNNSearch model achieves 23 .",
"61 , which is comparable to DeepLAU ( 23 . 80 ).",
"Our 2 Group RNNSearch model achieves a BLEU score of 23 .",
"93 , slightly less than GNMT ( 24 . 61 ), but outperforms the DeepLAU.",
"More importantly, our Group RNNSearch models decrease more than 30% and 50% RNN parameters with 2 groups and 4 groups respectively compared with GNMT.",
"At last, we valid our approach on abstractive summarization task.",
"We train on the Gigaword corpus (Graff and Cieri, 2003) and pre-process it identically to (Rush et al., 2015; Shen et al., 2016), resulting in 3 .",
"8 M training article-headline pairs, 190 K for validation and 2 , 000 for test.",
"Similar to (Shen et al., 2016), we use a source and target vocabulary consisting of 30 K words.",
"The model is almost same as the one used in De-En machine translation, which is a two layers RNNSearch model, except that the embedding size is 512 , and the LSTM hidden size in both encoder and decoder is 512 .",
"The initial values of all weight parameters are uniformly sampled between ( 0 . 05 , 0 . 05) .",
"We train our Group RNNSearch model by Adadelta (Zeiler, 2012) with learning rate 1 .",
"0 and gradient clipping threshold 1 .",
"5 (Pas-canu et al., 2013b).",
"The mini-batch size is 64 .",
"Results We evaluate the summarization task by commonly used ROUGE (Lin, 2004) F1 score.",
"During decoding, we use beam search with beam size 10.",
"The results are shown in Table 4.",
"From Table 4, we can observe that the performance is consistent with machine translation task.",
"Our Group RNNSearch model achieves comparable results with RNNSearch, and our 2 Group RNNSearch model even outperforms RNNSearch baseline.",
"Besides, we compare with several other widely adopted methods, our models also show strong performance.",
"Therefore, we can keep the good performance even though we reduce the parameters of the recurrent layers by nearly 50% , which greatly proves the effectiveness of our method.",
"In addition to showing that group RNN can achieve competing or better performance with much less number of parameters, we further study the effect of group number to training speed and convergence, and the effect of representation rear-Model",
"rangement to performance.",
"Due to space limitation, we only report results for language modeling on PTB dataset; for other tasks we have similar results.",
"In Figure 4, the left one shows that how number of parameters and training speed vary when group number ranging from 1 to 16.",
"We can see that the number of parameters (of recurrent layers) is reduced linearly when increasing number of groups.",
"In the meantime, we also achieves substantial speed up about throughput when increasing group number.",
"We note that the speedup is sub-linear instead of linear since our method focuses on the speedup on recurrent layers, as discussed in Section 3.2.",
"Besides, we also compare the convergence curve in the right of Figure 4, which shows that our method (almost) doesn't slow down the convergence in terms of epoch number.",
"Considering the throughput speedup of our method, we can accelerate training by a large margin.",
"At last, we study the role that representation rearrangement layer plays in our architecture.",
"We compare Group LSTM with and without representation rearrangement between layers and time steps, with the group number 2 and 4 respectively.",
"From Table 5, we can see that the models with representation rearrangement consistently outperforms the ones without representation rearrangement.",
"This shows the representation rearrangement is critical for group RNN.",
"Improving RNN efficiency for sequence learning is a hot topic in recent deep learning research.",
"For parameter and computation reduction, LightRNN (Li et al., 2016) is proposed to solve big vocabulary problem with a 2-component shared embedding, while our work addresses the parameter redundancy caused by recurrent layers.",
"To speed up RNN, Persistent RNN (Diamos et al., 2016) is proposed to improve the RNN computation throughput by mapping deep RNN efficiently onto GPUs, which exploits GPU's inverted memory hierarchy to reuse network weights over multiple time steps.",
"(Neil et al., 2017) proposes delta networks for optimizing the matrix-vector multiplications in RNN computation by considering the temporal properties of the data.",
"Quasi-RNN (Bradbury et al., 2016) and SRU (Lei and Zhang, 2017) are proposed for speeding up RNN computation by designing novel recurrent units which relax dependency between time steps.",
"Different from these works, we optimize RNN from the perspective of network architecture innovation by adopting a group strategy.",
"There is a long history about the group idea in deep learning, especially in convolutional neural networks, aiming to improve the computation efficiency and parameter efficiency.",
"Such works can date back at least to AlexNet (Krizhevsky et al., 2012), which splits the convolutional layers into 2 independent groups for the ease of model-parallelism.",
"The Inception (Szegedy et al., 2015) architecture proposes a module that employs uniform sparsity to improve the parameter efficiency.",
"Going to the extreme of Inception, the Xception (Chollet, 2016) adopts a depthwise separable convolution, where each spatial convolution only works on a single channel.",
"MobileNet (Howard et al., 2017) uses the same idea for efficient mobile model.",
"IGCNet (Zhang et al., 2017a) and ShuffleNet (Zhang et al., 2017b) also adopt the group convolution idea, and further permute the features across consecutive layers.",
"Similar to these works, we also exploit the group strategy.",
"But we focus on efficient sequence learning with RNN, which, different from CNN, contains an internal memory and an additional temporal direction.",
"In the RNN literature, there is only one paper (Kuchaiev and Ginsburg, 2017), to our best knowledge, exploiting the group strategy.",
"However, this work assumes the features are group independent, thus failing to capturing the inter-group correlation.",
"Our work employs a representational rearrangement mechanism, which avoids the assump-tion and improves the performance, as shown in our empirical experiments.",
"We have presented an efficient RNN architecture for sequence learning.",
"Our architecture employs a group recurrent layer to learn intra-group correlation efficiently, and representation rearrangement layer to recover inter-group correlation for keeping representation ability.",
"We demonstrate our model is more efficient in terms of parameters and computational cost.",
"We conduct extensive experiments on language modeling, neural machine translations and abstractive summarization, showing that our method achieves competing performance with much less computing resource."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"method",
"result",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"objective",
"result"
] |
[
"This paper describes the motivation and development of speech synthesis systems for the purposes of language revitalization.",
"By building speech synthesis systems for three Indigenous languages spoken in Canada, Kanien'kha, Gitksan & SENOEN, we re-evaluate the question of how much data is required to build low-resource speech synthesis systems featuring state-of-the-art neural models.",
"For example, preliminary results with English data show that a FastSpeech2 model trained with 1 hour of training data can produce speech with comparable naturalness to a Tacotron2 model trained with 10 hours of data.",
"Finally, we motivate future research in evaluation and classroom integration in the field of speech synthesis for language revitalization.",
"There are approximately 70 Indigenous languages spoken in Canada, from 10 distinct language families (Rice, 2008).",
"As a consequence of the residential school system and other policies of cultural suppression, the majority of these languages now have fewer than 500 fluent speakers remaining, most of them elderly.",
"Despite this, interest from students and parents in Indigenous language education continues to grow (Statistics Canada, 2016) we have heard from teachers that they are overwhelmed with interest from potential students, and the growing trend towards online education means many students who have not previously had access to language classes now do.",
"Supporting these growing cohorts of students comes with unique challenges for languages with few fluent first-language speakers.",
"A particular concern of teachers is to provide their students with opportunities to hear the language outside of 1 National Research Council Canada 2 University of Edinburgh 3 Queen's University class.",
"Text-to-speech synthesis technology (TTS) shows potential for supplementing text-based language learning tools with audio in the event that the domain is too large to be recorded directly, or as an interim solution pending recordings from first-language speakers.",
"Development of TTS systems in this context faces several challenges.",
"Most notable is the usual assumption that neural speech synthesis models require at least tens of hours of audio recordings with corresponding text transcripts to be trained adequately.",
"Such a data requirement is far beyond what is available for the languages we are concerned with, and is difficult to meet given the limited time of the relatively small number of speakers of these languages.",
"The limited availability of Indigenous language speakers also hinders the subjective evaluation methods often used in TTS studies, where naturalness of synthetic speech samples is judged by speakers of the language in question.",
"In this paper, we re-evaluate some of these challenges for applying TTS in the low-resource context of language revitalization.",
"We build TTS systems for three Indigenous languages of Canada, with training data ranging from 25 minutes to 3.5 hours, and confirm that we can produce acceptable speech as judged by language teachers and learners.",
"Outputs from these systems could be suitable for use in some classroom applications, for example a speaking verb conjugator.",
"It is no secret that the majority of the world's languages are in crisis, and in many cases this crisis is even more urgent than conservation biolo-gists' dire predictions for flora and fauna (Suther-land, 2003).",
"However, the doom and gloom' rhetoric that often follows endangered languages over-represents vulnerability and under-represents 7346 the enduring strength of Indigenous communities who have refused to stop speaking their languages despite over a century of colonial policies against their use (Pine and Turin, 2017).",
"Continuing to speak Indigenous languages is often seen as a political act of anti-colonial resistance.",
"As such, the goals of any given language revitalization effort extend far beyond memorizing verb paradigms to broader goals of nationhood and self-determination (Pitawanakwat, 2009 McCarty, 2018).",
"Language revitalization programs can also have immediate and important impacts on factors including community health and wellness (Whalen et al., 2016 Oster et al., 2014).",
"There is a growing international consensus on the importance of linguistic diversity, from the Truth & Reconciliation Commission of Canada (TRC) report in 2015 which issued nine calls to action related to language, to 2019 being declared an International Year of Indigenous Languages by the UN, and 2022-2032 being declared an International Decade of Indigenous Languages.",
"From 1996 to 2016, the number of speakers of Indigenous languages increased by 8% (Statistics Canada, 2016).",
"These efforts have been successful despite a lack of support from digital technologies.",
"While opportunities may exist for technology to assist and support language revitalization efforts, these technologies must be developed in a way that does not further marginalize communities (Brinklow et al., 2019 Bird, 2020).",
"Our interest in speech synthesis for language revitalization was sparked during user evaluations of Kawennn:nis (lit. it makes words'), a Kanien'kha verb conjugator (Kazantseva et al., 2018) developed in collaboration between the National Research Council Canada and the Onkwawenna Kentyohkwa adult immersion program in Six Nations of the Grand River in Ontario, Canada.",
"Kawennn:nis models a pedagogically-important subset of verb conjugations in XFST (Beesley and Karttunen, 2003), and currently produces 247,450 unique conjugations.",
"The pronominal system is largely responsible for much of this productivity, since in transitive paradigms, agent/patient pairs are fused, as illustrated in Figure 1.",
"In user evaluations of Kawennn:nis, students often asked whether it was possible to add audio to the tool, to model the pronunciation of unfamil-(1) Se nn:wes you.to.it -like-habitual You like it .' (2) Take nn:wes you.to.me -like-habitual You like me .' Figure 1: An example of fusional morphology of agent/patient pairs in Kanien'kha transitive verb paradigms (from Kazantseva et al., 2018) iar words.",
"Assuming a rate of 200 forms/hr for 4 hours per day, 5 days per week, this would take a teacher out of the classroom for approximately a year.",
"Considering Kawennn:nis is anticipated to have over 1,000,000 unique forms by the time the grammar modelling work is finished, recording audio manually becomes infeasible.",
"The research question that then emerged was what is the smallest amount of data needed in order to generate audio for all verb forms in Kawen-nn:nis'.",
"Beyond Kawennn:nis, we anticipate that there are many similar language revitalization projects that would want to add supplementary audio to other text-based pedagogical tools.",
"The last few years have shown an explosion in research into purely neural network-based approaches to speech synthesis (Tan et al., 2021).",
"Similar to their HMM/GMM predecessors, neural pipelines typically consist of both a network predicting the acoustic properties of a sequence of text and a vocoder.",
"The feature prediction network must be trained using parallel speech/text data where the input is typically a sequence of characters or phones that make up an utterance, and the output is a sequence of fixed-width frames of acoustic features.",
"In most cases the predictions from the TTS model are log Mel-spectral features and a vocoder is used to generate the waveform from these acoustic features.",
"Much of the previous work on low resource speech synthesis has focused on transfer learning that is, pre-training' a network using data from a language that has more data, and then fine-tuning' using data from the low-resource language.",
"One of the problems with this approach is that the input space often differs between languages.",
"As the 7347 inputs to these systems are sequences of characters or phones, and as these sequences are typically one-hot encoded, it can be difficult to devise a principled method for transferring weights from the source language network to the target if there is a difference between the character or phone inventories of the two languages.",
"Various strategies have emerged for normalizing the input space.",
"For example, Demirsahin et al. (2018) propose a unified inventory for regional multilingual training of South Asian languages, while Tu et al. (2019) compare various methods to create mappings between source and target input spaces.",
"Another proposal is to normalize the input space between source and target languages by replacing one-hot encodings of text with multi-hot phonological feature encodings (Gutkin et al., 2018 Wells and Richmond, 2021).",
"There is extremely little published work on speech synthesis for Indigenous languages in Canada (and North America generally).",
"A statistical parametric speech synthesizer using Simple4All was recently developed for Plains Cree (Harrigan et al., 2019 Clark, 2014).",
"Although it was unpublished, two highschool students 1 created a statistical parametric speech synthesizer for Kanien'kha by adapting eSpeak (Duddington and Dunn, 2007).",
"We know of no other attempts to create speech synthesis systems for Indigenous languages in Canada.",
"Elsewhere in North America, a Tacotron2 system has been built for Cherokee (Conrad, 2020), and some early work on concatenative systems for Navajo was discussed in a technical report (Whitman et al., 1997), as well as on Rarmuri (Urrea et al., 2009).",
"Although the term low resource' is used to describe a wide swath of languages, most Indigenous languages in Canada would be considered low-resource' in multiple senses of the word, having both a low amount of available data (annotated or unannotated), and a relatively low number of speakers.",
"Most Indigenous languages lack transcribed audio corpora, and fewer still have such data recorded in a studio context.",
"Due to the limited number of speakers, creating these resources is 1 https://wiki .",
"non-trivial: there are limited amounts of text from which a speaker could read, and there are few people available who are sufficiently literate in the languages to transcribe recorded audio.",
"Re-focusing speakers' limited time to these tasks presents a significant opportunity cost they are often already over-worked and over-burdened in under-funded and under-resourced language teaching projects.",
"As mentioned in 2.1, language technology projects that aim to assist language revitalization and reclamation efforts must be centered around the primary goals of those efforts and ensure that the means of developing the technology do not distract or work against the broader sociopolitical goals.",
"A primary stress point for many natural language processing projects involving Indigenous communities surrounds issues of data sovereignty.",
"It is important that communities direct the development of these tools, and maintain control, ownership, and distribution rights for their data, as well as for the resulting speech synthesis models (Kee-gan, 2019 Brinklow, 2021).",
"In keeping with this, the datasets described in this paper are not being released publicly at this time.",
"To test the feasibility of developing speech synthesis systems for Indigenous languages, we trained models for three unrelated Indigenous languages, Kanien'kha (3.1), Gitksan (3.2), and SENOEN (3.3).",
"Kanien'kha 2 (a.k.a. Mohawk) is an Iroquoian language spoken by roughly 2,350 people in southern Ontario, Quebec, and northern New York state (Statistics Canada, 2016).",
"In 1979 the first immersion school of any Indigenous language in Canada was opened for Kanien'kha, and many other very successful programs have been started since, including the Onkwawenna Kentyohkwa adult immersion program in 1999 (Gomashie, 2019).",
"In the late 1990s, a team of five Kanien'kha translators worked with the Canadian Bible Society to translate and record parts of the Bible one of the speakers on these recordings, Satewas, is still living.",
"Translation runs in Satewas's family, with his great-grandfather also working on Bible translations in the 19th century.",
"Later, a team of four speakers and learners, including this paper's third author, aligned the text and audio at the utterance 2 As there are different variations of spelling, we use the spelling used in the communities of Kahnaw:ke and Kahneset:ke throughout this paper 7348 level using Praat (Boersma and van Heuven, 2001) and ELAN (Brugman and Russel, 2004).",
"While a total of 24 hours of audio were recorded, members of the Kanien'kha-speaking community told us it would be inappropriate to use the voices of speakers who had passed away, leaving only recordings of Satewas's voice.",
"Using a GMM-based speaker ID system (Kumar, 2017), we removed utterances by these speakers, then removed utterances that were outliers in duration (less than 0.4s or greater than 11s) and speaking rate (less than 4 phones per second or greater than 15), recordings with an unknown phase effect present, and utterances containing non-Kanien'kha characters (e.g. proper names like Euphrades').",
"Handling utterances with non-Kanien'kha characters would have required grapheme-to-phoneme prediction capable of dealing with multilingual text and code-switching which we did not have available.",
"The resulting speech corpus comprised 3.46 hours of speech.",
"Gitksan 3 is one of four languages belonging to the Tsimshianic language family spoken along the Skeena river and its surrounding tributaries in the area colonially known as northern British Columbia.",
"Traditional Gitksan territory spans some 33,000 square kilometers and is home to almost 10,000 people, with approximately 10% of the population continuing to speak the language fluently (First Peoples' Cultural Council, 2018).",
"As there were no studio-quality recordings of the Gitksan language publicly available, and as an intermediate speaker of the language, the first author recorded a sample set himself.",
"In total, he recorded 35.46 minutes of audio reading isolated sentences from published and unpublished stories (Forbes et al., 2017).",
"The SENOEN language is spoken by the W SNE people on the southern part of the is-land colonially known as Vancouver Island.",
"It belongs to the Coastal branch of the Salish language family.",
"The W SNE community runs a world-famous language revitalization program 4 , and uses 3 We use Lonnie Hindle and Bruce Rigsby's spelling of the language, which, with the use of k' and a' is a blend of upriver (gigeenix) and downriver (gyets) dialects 4 https://wsanecschoolboard .",
"an orthography developed by the late SENOEN speaker and W SNE elder Dave Elliott.",
"While the community of approximately 3,500 has fewer than 10 fluent speakers, there are hundreds of learners, many of whom have been enrolled in years of immersion education in the language (First Peoples' Cultural Council, 2018).",
"As there were no studio-quality recordings of the SENOEN language publicly available, we recorded 25.92 minutes of the language with PEN David Underwood reading two stories originally spoken by elder Chris Paul.",
"Given the motivation and context for language revitalization-based speech synthesis, a number of research questions follow.",
"Namely, how much data is required in order to build a system of reasonable pedagogical quality?",
"How do we evaluate such a system?",
"And, how is the resulting system best integrated into the classroom?",
"In 4.1, we discuss the difficulty of evaluating TTS systems in low-resource settings.",
"We then discuss preliminary results for English and Indigenous language TTS which show that acceptable speech quality can be achieved with much less training data than usually considered for neural speech synthesis (4.2).",
"Finally, we suggest possible directions for pedagogical integration in section 4.4.",
"One of the most significant challenges in researching speech synthesis for languages with few speakers is evaluating the models.",
"For some Indigenous languages in Canada, the total number of speakers of the language is less than the number typically required for statistical significance in a listening test (Wester et al., 2015).",
"While the number of speakers in these conditions is sub-optimal for statistical analysis, we have been told by the communities we work with that the positive assessment of a few widely respected and community-engaged language speakers would be practically sufficient to assess the pedagogical value of speech models in language revitalization contexts.",
"For the experiments described in this paper, we ran listening tests for both Kanien'kha and Gitksan with speakers, teachers, and learners, but were not able to run any such tests for SENOEN due to very few speakers with already busy schedules.",
"Mel cepstral distortion (MCD, Kubichek, 1993), we do not believe they should be considered reliable proxies for listening tests.",
"Future research on speech synthesis for languages with few speakers should prioritize efficient and effective means of evaluating results.",
"In many cases, including in the experiment described in 4.2, artificial data constraints can be placed on a language with more data, like English, to simulate a low-resource scenario.",
"While this technique can be insightful and it is tempting to draw universal conclusions, English is linguistically very different from many of the other languages spoken in the world.",
"Accordingly, we should be cautious not to assume that results from these types of experiments will necessarily transfer or extend to genuinely low-resource languages.",
"The first question to answer is whether our Indigenous language corpora ranging from 25 minutes to 3.46 hours of speech are sufficient for building neural speech synthesizers.",
"Due to the prominence of Tacotron2 (Shen et al., 2018), it seems that many people have assumed that the data requirements for training any neural speech synthesizer of similar quality must be the same as the requirements for this particular model.",
"As a result, some researchers still choose to implement either concatenative or HMM/GMM-based statistical parametric speech synthesis systems in low-resource situations based on the assumption that a sufficiently large corpus [for neural TTS] is unavailable (James et al., 2020, p. 298).",
"We argue that attention-based models such as Tacotron2 should not be used as a benchmark for data requirements among all neural TTS methods, as they are notoriously difficult to train and unnecessarily inflate training data requirements.",
"Tacotron2 is an autoregressive model, meaning it predicts the speech parameters y t from both the input sequence of text x and the previous speech parameters y 1 , ..., y t 1 .",
"Typically, the model is trained with teacher-forcing', where the autoregressive frame y t 1 passed as input for predicting y t is taken from the ground truth acoustic features and not the prediction network's output from the previous frame y t 1 .",
"As discussed by Liu et al. (2019), such a system might learn to copy the teacher forcing input or disregard the text entirely, which could still optimize Tacotron2's root mean square error function over predicted acoustic features, but result in an untrained or degenerate attention network which is unable to properly generalize to new inputs at inference time when the teacher forcing input is unavailable.",
"Attention failures represent a characteristic class of errors for models such as Tacotron2, for example skipping or repeating words from the input text (Valentini-Botinhao and King, 2021).",
"There have been many proposals to improve training of the attention network, for example by guiding the attention or using a CTC loss function to respect the monotonic alignment between text inputs and speech outputs (Tachibana et al., 2018 Liu et al., 2019 Zheng et al., 2019 Glge, 2020).",
"As noted by Liu et al. (2019), increasing the so-called reduction factor' which applies dropout to the autoregressive frames can also help the model learn to rely more on the attention network than the teacher forcing inputs, but possibly at the risk of compromising synthesis quality.",
"FastSpeech2 (Ren et al., 2021), and similar systems like FastPitch (acucki, 2021), present an alternative to Tacotron2-type attentive, autoregressive systems with similar listening test results and without the characteristic errors related to attention.",
"Instead of modelling duration using attention, they include an explicit duration prediction module trained on phone duration targets extracted from the training data.",
"For the original FastSpeech, target phone durations derived from the attention weights of a pre-trained Tacotron2 system were used to provide phone durations (Ren et al., 2019).",
"In low-resource settings, however, there might not be sufficient data to train an initial Tacotron2 in the target language in the first place.",
"For FastSpeech2, phone duration targets are instead extracted using the Montreal Forced Aligner (MFA, McAuliffe et al., 2017), trained on the same data as used for TTS model training.",
"We have found MFA can provide suitable alignments for our target languages, even with alignment models being trained on only limited data.",
"Faster convergence of text-acoustic feature alignments has been found to speed up overall encoder-decoder TTS model training, as stable alignments provide a solid foundation for further training of the decoder.",
"Badlani et al. (2021) show this by adding a jointly-learned alignment framework to a Tacotron2 architecture, reducing time to convergence.",
"(Figure: encoder-decoder attention alignment heat map, encoder timestep against decoder timestep.)",
"In contrast, they found that replacing the MFA duration targets in FastSpeech2 training offers no benefit: forced-alignment targets already provide enough information for more time-efficient training compared to an attention-based Tacotron2 system.",
"Relieving the burden of learning an internal alignment model also opens the door to more data-efficient training.",
"For example, Perez-Gonzalez-de-Martos et al. (2021) submitted a non-attentive model trained from forced alignments to the Blizzard Challenge 2021, where their system was found to be among the most natural and intelligible in subjective listening tests despite only using 5 hours of speech; all other submitted systems included often significant amounts of additional training data (up to 100 hours total).",
"To investigate the effects of differing amounts of data on the attention network, and in preparation for training systems with our limited Indigenous language data sets, we trained five Tacotron2 models on incremental partitions of the LJ Speech corpus of American English (Ito and Johnson, 2017).",
"We used the NVIDIA implementation with default hyperparameters, apart from a reduced batch size of 32 to fit the memory capacity of our GPU resources.",
"We artificially constrained the training data such that the first model saw only the first hour of data from the shuffled corpus, the second model that same first hour plus another two hours (3 total), etc., so that the five models were trained on 1, 3, 5, 10 and 24 (full corpus) hours of speech.",
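A sketch of how such nested partitions might be constructed; this is our own illustration, and `cumulative_partitions` and its input format are hypothetical:

```python
import random

def cumulative_partitions(utterances, hour_marks=(1, 3, 5, 10, 24)):
    """Build nested training subsets: each larger subset contains every
    utterance of the smaller ones. `utterances` is a list of
    (path, duration_seconds) pairs for the corpus."""
    rng = random.Random(0)           # one fixed shuffle shared by all subsets
    utterances = list(utterances)
    rng.shuffle(utterances)
    partitions, subset, total = {}, [], 0.0
    marks = list(hour_marks)
    for utt in utterances:
        subset.append(utt)
        total += utt[1]
        # Record a partition the moment its hour mark is reached (it may
        # overshoot the mark by at most one utterance).
        while marks and total >= marks[0] * 3600:
            partitions[marks.pop(0)] = list(subset)
    if marks:                        # the full corpus may fall short of 24 h
        partitions[marks[0]] = list(subset)
    return partitions
```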
"The models were trained for 100k steps and, as seen in Figure 2, with up to 5 hours of data the attention mechanism does not learn properly, resulting in degenerate outputs.",
"For comparison, we trained seven FastSpeech2 models with batch size 16 for 200k steps on 15 and 30 minute, 1, 3, 5, 10 and 24 hour incremental partitions of LJ Speech.",
"Our model is based on an open-source implementation (Chien, 2021), which adds learnable speaker embeddings and a decoder postnet to the original model, as well as predicting pitch and energy values at the phone rather than the frame level.",
"We also added learnable language embeddings for supplementary experiments in cross-lingual fine-tuning; while not reported in this paper, we refer the interested reader to Pine (2021) for discussion of these experiments.",
"Motivated by concerns of efficiency in model training and inference, and the possibility of overfitting a large model to limited amounts of data, we further modified the base architecture to match the LightSpeech model presented in Luo et al. (2021).",
"We removed the energy adaptor, replaced the convolutional layers in the encoder, decoder and remaining variance predictors with depthwise separable convolutions (Kaiser et al., 2018) and matched encoder and decoder convolutional kernel sizes with Luo et al. (2021).",
"This reduced the number of model parameters from 35M to 11.6M without noticeable change in voice quality, and sped up training by 33% on GPU or 64% on CPU.",
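To illustrate where the parameter savings come from, the following sketch compares a standard and a depthwise separable 1-D convolution in PyTorch; the channel width of 256 and kernel size of 9 are assumed settings for illustration, not necessarily the model's actual configuration:

```python
import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

standard = nn.Conv1d(256, 256, kernel_size=9, padding=4)
separable = nn.Sequential(
    nn.Conv1d(256, 256, kernel_size=9, padding=4, groups=256),  # depthwise
    nn.Conv1d(256, 256, kernel_size=1),                         # pointwise
)
print(n_params(standard), n_params(separable))  # ~590k vs ~68k parameters
```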
"For additional discussion of the accessibility benefits of these changes with respect to Indigenous language communities, see Appendix A.",
"4.2.3 Results",
"We conducted a short (10-15 minute) listening test to compare the two Tacotron2 models that trained properly (10h, full) against the seven FastSpeech2 models.",
"We recruited 30 participants through Prolific, and presented each with four MUSHRA-style questions where they were asked to rank the 9 voices along with a hidden natural speech reference (ITU-R, 2003).",
"MUSHRA-style questions were used as a practical way to evaluate this large number of models.",
"While it only took 30 minutes to recruit 30 participants using Prolific, the quality of responses was quite varied.",
"We rejected two outright as they seemingly did not listen to the stimuli and left the same rankings for every voice.",
"Even still, there was a lot of variation in responses from the remaining participants, as seen in Figure 3.",
"We tested for significant differences between pairs of voices using Bonferroni-corrected Wilcoxon signed-rank tests.",
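A sketch of this analysis with SciPy; the function and its input format are our own assumptions, not the authors' evaluation code:

```python
from itertools import combinations
from scipy.stats import wilcoxon

def pairwise_tests(ratings, alpha=0.05):
    """ratings: dict mapping voice name -> list of per-item MUSHRA scores,
    paired across voices (same items, same listeners).
    Returns {(voice_a, voice_b): (p_value, significant_after_bonferroni)}."""
    pairs = list(combinations(sorted(ratings), 2))
    corrected = alpha / len(pairs)   # Bonferroni correction
    results = {}
    for a, b in pairs:
        _, p = wilcoxon(ratings[a], ratings[b])
        results[(a, b)] = (p, p < corrected)
    return results
```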
"Pairwise test results are summarized in the heat map of their p-values in Figure 4.",
"In the results from the pairwise analysis, we can see that natural speech is rated as significantly more natural than all synthetic speech samples.",
"Naturalness ratings for the FastSpeech2 voices trained on 15m and 30m of data are significantly lower than all other voices, and significantly different from each other.",
"The results for the remaining voices show no significant differences between any pair.",
"Figure 4: Pairwise Bonferroni-corrected Wilcoxon signed-rank tests between each pair of voices.",
"This is a relevant and important finding for low-resource speech synthesis because it shows that a FastSpeech2 voice built with 3 hours of data can achieve subjective naturalness ratings which are not significantly different from a Tacotron2 voice built with 24 hours of data.",
"Similarly, the results of the listening test for our FastSpeech2 voice built with 1 hour of data are not significantly different from our Tacotron2 voice built with 10 hours of data.",
"Additionally, while all the FastSpeech2 voices were intelligible, all Tacotron2 models trained with less than 10 hours of data produced unintelligible speech.",
"Despite the difficulty in evaluation (Section 4.1), we built and evaluated a number of TTS systems for the Indigenous languages described in Section 3.",
"We had a baseline concatenative model available for Kanien'kéha that we had previously built using Festival and Multisyn (Taylor et al., 1998; Clark et al., 2007).",
"Additionally, we trained cold-start FastSpeech2 models for each language, as well as models fine-tuned for 25k steps from a multilingual, multispeaker FastSpeech2 model pre-trained on a combination of VCTK (Yamagishi et al., 2019), Kanien'kéha and Gitksan recordings.",
"A rule-based mapping from orthography to pronunciation form was developed for each language using the 'g2p' Python library, in order to perform alignment and synthesis at the phone level instead of the character level (Pine et al., Under Review).",
"We carried out listening-test evaluations of the Gitksan and Kanien'kéha models.",
"Participants were recruited by contacting teachers, learners and linguists with at least some familiarity with the languages.",
"For the Kanien'kéha listening test, 6 participants were asked to answer 20 A/B questions comparing synthesized utterances from the various models.",
"We used A/B tests for more targeted comparisons between different systems, namely cold-start vs. fine-tuned and neural vs. concatenative.",
"Results showed that 72.2% of A/B responses from participants preferred our FastSpeech2 model over our baseline concatenative model.",
"In addition, 81.7% of A/B responses from participants preferred the cold-start model to the model fine-tuned from the multispeaker, multilingual model, suggesting that the transfer-learning approach discussed in Section 2.3 might not be necessary for models with explicit durations such as FastSpeech2, since they are relieved of the burden of learning an implicit model of duration through attention from limited data.",
"For the Gitksan listening test, we did not build a concatenative model as for Kanien'kéha, and so we were not comparing different models, but rather just gathering opinions on the quality of the cold-start FastSpeech2 model.",
"Accordingly, 10 MOS-style questions were presented to 12 participants for both natural utterances and samples from our FastSpeech2 model.",
"The model received a MOS of 3.56 ± 0.26, compared with a MOS of 4.63 ± 0.19 for the reference recordings, as shown in Figure 5.",
"While both the Kanien'kéha and Gitksan results seem to corroborate our belief that these models should be of reasonable quality despite limited training data, it is difficult to make any conclusive statement given the low number of eligible participants available for evaluation.",
"As the main goal of our efforts here is to eventually integrate our speech synthesis systems into a pedagogical setting, we also asked the 18 people who participated across the Kanien'kéha and Gitksan listening tests directly whether they approved of the synthesis quality.",
"Figure 5: Box plot of MOS results for the Gitksan listening test.",
"As seen in Figure 6, participant responses were generally positive; full responses are reported in Appendix B.",
"4.4 Integrating TTS in the Classroom",
"Satisfying the goal of adding supplementary audio to a reference tool like Kawennón:nis can be straightforwardly implemented by linking entries in the verb conjugator to pre-generated audio for the domain, served from a static server.",
"This implementation also limits the potential for out-of-domain utterances that might be deemed inappropriate, which is an ethical concern in communities with low numbers of speakers, where the identity of the model's speaker is easily determined.",
"However, the ability to synthesize novel utterances could be pedagogically useful.",
"Students often come into contact with words or sentences which do not have audio, and teachers often have to prepare new thematic word lists or vocabulary lessons that could benefit from a more general purpose speech synthesis solution.",
"In those cases, with community and speaker input, we might consider what controls would be necessary for the users of this technology.",
"One potential solution is the variance adaptor architecture present in FastSpeech2, allowing for phone-level control of duration, pitch and energy; an engaging demonstration of a graphical user interface for the corresponding controls in a FastPitch model is also available.",
"We would like to focus further efforts on designing a user interface for speech synthesis systems that satisfies ethical concerns while prioritizing language pedagogy as the fundamental use case.",
"In addition to fine-grained prosodic controls, we would like to explore the synthesis of hyper-articulated speech, as often used by language teachers when modelling pronunciation of unfamiliar words or sounds for students.",
"This style of speech typically involves adjustment beyond the parameters of pitch, duration and energy, and is characterized by more careful enunciation of individual phones than is found in normal speech.",
"This problem has parallels to the synthesis of Lombard speech (Hu et al., 2021), as used to improve intelligibility by speakers who find themselves in noisy environments.",
"In this paper, we presented the first neural speech synthesis systems for Indigenous languages spoken in Canada.",
"Subjective listening tests showed encouraging results for the naturalness and acceptability of voices for two languages, Kanien'kéha and Gitksan, despite limited training data availability (3.5 hours and 35 minutes, respectively).",
"More extensive evaluation on English shows that the FastSpeech2 architecture can produce speech with similar quality to a Tacotron2 system using a fraction of the amount of speech usually considered for neural speech synthesis.",
"Notably, a FastSpeech2 voice trained on 1 hour of English speech achieved subjective naturalness ratings not significantly different from a Tacotron2 voice using 10 hours of data, while a 3-hour FastSpeech2 system showed no significant difference from a 24-hour Tacotron2 voice.",
"We attribute these results to the fact that FastSpeech2 learns input token durations from forced alignments, rather than jointly learning to align linguistic inputs to acoustic features alongside the acoustic feature prediction task, as in attention-based architectures such as Tacotron2.",
"Given forced alignments of sufficient quality, which we found to be achievable even by training a Montreal Forced Aligner model only on our limited Indigenous language training data, this makes for more data-efficient training of neural TTS systems than has generally been explored in previous work.",
"These findings show great promise for future work in low-resource TTS for language revitalization, especially as they come from systems trained from scratch on such limited data, rather than pre-training on a high-resource language and subsequent fine-tuning on limited target language data.",
"We would like to gratefully acknowledge the many people who worked to record the audio for the speech synthesis systems described in this project.",
"In particular, Satewas Harvey Gabriel and PENÁĆ David Underwood.",
"Much of the text and experimentation related to this paper was submitted as partial fulfillment of the first author's M.Sc. dissertation at the University of Edinburgh (Pine, 2021).",
"This work was supported in part by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences."
] | [
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"Early stopping, which is widely used to prevent overfitting, is generally based on a separate validation set.",
"However, in low resource settings, validation-based stopping can be risky because a small validation set may not be sufficiently representative, and the reduction in the number of samples by validation split may result in insufficient samples for training.",
"In this study, we propose an early stopping method that uses unlabeled samples.",
"The proposed method is based on confidence and class distribution similarities.",
"To further improve the performance, we present a calibration method to better estimate the class distribution of the unlabeled samples.",
"The proposed method is advantageous because it does not require a separate validation set and provides a better stopping point by using a large unlabeled set.",
"Extensive experiments are conducted on five text classification datasets and several stop-methods are compared.",
"Our results show that the proposed model even performs better than using an additional validation set as well as the existing stop-methods, in both balanced and imbalanced data settings.",
"Our code is available at https://github.com/DMCB-GIST/BUS-stop.",
"Early stopping, a form of regularization, is a widely used technique to prevent a model from over-fitting (Yao et al., 2007; Zhang et al., 2017).",
"It is generally based on a separate validation set (Goodfel-low et al., 2016).",
"While monitoring the validation performance during training, the training process stops when the validation error starts to increase.",
"Validation-based early stopping is advantageous because it is easy to implement and can be interpreted directly (Prechelt, 1998).",
"In a scenario where sufficient labeled data are available, the use of a validation set is generally preferred (Goodfellow et al., 2016).",
"However, when only a few labeled data exist, a tradeoff problem is encountered (Kann et al., 2019; Choi and Lee, 2021).",
"For example, although the usage of a relatively large validation set enables more reliable estimation, the number of samples for training becomes insufficient.",
"Conversely, if small fractions of the samples are assigned to the validation set, the stopping point becomes ambiguous because the small validation set is not representative enough.",
"Early stopping is more important in a low-resource setting because the prediction accuracy fluctuates highly during training.",
"Such high fluctuations make it challenging to decide when to stop training the model.",
"One way to mitigate these fluctuations is to use sufficient training data.",
"In this context, training all the available samples would be more effective, and for this purpose, an appropriate stopping point should be determined without validation split.",
"However, this has not been extensively studied.",
"Duvenaud et al. (2016) and Mahsereci et al. (2017) proposed gradient-based stop-methods and applied statistical inference on the training samples.",
"Lee and Chung (2021) suggested the usage of local intrinsic dimensionality (LID) for early stopping.",
"In addition, some studies treat the stopping epoch as a hyperparameter: the stopping epoch is obtained by grid-search or averaging in cross validation (Choi and Lee, 2021).",
"These methods allow the training of all the labeled samples.",
"However, they do not consider the task-related performance metrics (e.g., accuracy) during training, and the LID and gradient-based stop criteria have not been commonly used in natural language processing (NLP).",
"Furthermore, gradient-based stop-criteria depend on the training samples, the size of which may still be small to be representative.",
"In this study, we propose an early stopping method Based on Unlabeled Samples (BUS-stop).",
"We are motivated by the following two considerations:",
"(i) The probabilities of the predicted class label (i.e., the prediction confidences) can serve as an indicator for over-fitting or under-fitting.",
"(ii) In a better model, the output class distribution is more likely to be closer to the class distribution of the true labels.",
"To incorporate these two assumptions, two stop criteria are proposed, and combined in the BUS-stop method.",
"Our method monitors the prediction results of unlabeled samples during training and utilizes them for determining the stop-criteria.",
"The first proposed stop-criterion is based on confidence similarity (conf-sim).",
"The model stops when the prediction confidences for the unlabeled samples are most similar to the reference confidences, which are precalculated on the labeled set with cross-validation.",
"Conf-sim is observed to reflect the long-term trend of the loss curve, and thereby assist in preventing over-training.",
"The second stop criterion is based on the class distribution similarity (class-sim).",
"This criterion stops the model when the predicted class distribution on the unlabeled set is most similar to the pre-estimated distribution.",
"To this end, we present a novel estimation method for the true class distribution, which calibrates the predicted distribution by extrapolation such that it is closer to the true distribution.",
"Class-sim is observed to reflect the short-term trend of the accuracy.",
"Our method requires several retraining steps to obtain the reference confidences for conf-sim and the estimated class distribution for class-sim.",
"The BUS-stop method that combines class-sim and conf-sim includes the advantages of both, and thereby performs with better accuracy and loss compared to each (class-sim and conf-sim).",
"The following characteristics of our method contribute to performance improvement.",
"Our method does not require a separate validation set; hence, all the labeled samples can be trained.",
"Training can stop at a more generalized model, using a large unlabeled set.",
"The proposed stop-criteria, conf-sim and class-sim, consider two performance metrics, namely, the loss and accuracy.",
"Our contributions are summarized as follows: We propose BUS-stop, an early stopping method, based on unlabeled samples.",
"BUS-stop can stop the training at a more generalized model, and the performance is better even than using an additional validation set.",
"Furthermore, we present a calibration method to better estimate the class distribution.",
"This method calibrates the output class distribution to render it closer to the true distribution, improving the class-sim performance.",
"Extensive experiments are conducted on five popular text classification datasets in English.",
"Comparison with several stop-methods demonstrates that the proposed method outperforms these existing stop-methods in both balanced and imbalanced data settings.",
"Prechelt (1998) experimented on 14 different validation-based stop criteria.",
"Prechelt (1998) focused on an issue that the validation error during training may represent many local minima prior to a global optimum.",
"Existing non-validation stop-criteria are generally based on statistical inference.",
"Duvenaud et al. (2016) interpreted stochastic gradient descent in terms of the variational inference and proposed an estimation method for the marginal likelihood of the posterior, which was applied as an early stopping criterion.",
"However, this method requires considerable computation for the Hessian, which is not practical in large models.",
"Mahsereci et al. (2017) also proposed a gradient-related stopping method referred to as evidence-based stopping (EB).",
"The EB-criterion is based on the fast-to-compute local statistics of the computed gradients.",
"The criterion represents whether the gradients of the training samples lie within the expected range.",
"Intrinsic dimensionality (ID), which refers to the minimum number of parameters required to represent a dataset, has been used for analyzing the training or redundancy of neural networks (Amsaleg et al., 2015).",
"LID is a version of ID that estimates the subspace dimensions of the local regions.",
"Lee and Chung (2021) found that LID works well as a stopping-criterion in several few-shot image classification datasets.",
"Moreover, LID can be applied to unlabeled samples.",
"Another method involves the pre-estimation of the number of training epochs by training the model multiple times, such as with cross-validation (Choi and Lee, 2021); the model can then stop at the pre-estimated (PE) stop-epoch when training all the labeled samples.",
"However, these methods have not been commonly studied for NLP tasks and do not consider the performance metrics during training.",
"Furthermore, comparisons among the non-validation stop-methods have not been reported.",
"In this study, we compare our method with the EB, LID, PE, and validation-based stopping methods on five text classification datasets.",
"Algorithm 1: Preliminary stage for BUS-stop.",
"Input: labeled set $D_l$, unlabeled set $D_u$. Output: sorted output probabilities $\vec{P}_l$ and calibrated class distribution $\vec{C}_u$.",
"Initialize $Count[1..n_l] = 0$ and $P_l[1..n_l] = 0$. For $t \in \{1, \ldots, T\}$: initialize a model $M$; split $D_l$ into $D_{train}$ and $D_{val}$ at a ratio of $r$; train $M$ on $(D_{train}, D_{val})$ and reload the checkpoint that was best on $D_{val}$; for each $x_i \in D_{val}$, compute $p_i = M(x_i)$, then set $P_l[i] = P_l[i] + p_i$ and $Count[i] = Count[i] + 1$; finally, compute $\tilde{C}_u = M(D_u)$ and $(\tilde{C}_{val}, Acc_{val}) = M(D_{val})$, and set $\vec{C}^t_u = Calibration(\tilde{C}_u, \tilde{C}_{val}, Acc_{val})$.",
"After the $T$ iterations, set $P_l[i] = P_l[i] / Count[i]$ for each $x_i \in D_l$, sort $P_l$ in ascending (or descending) order to obtain $\vec{P}_l$, and return $\vec{P}_l$ and $\vec{C}_u = \sum_{t=1}^{T} \vec{C}^t_u / T$.",
"The method proposed by Duvenaud et al. (2016) was not compared because it involves considerable computational cost.",
"In this section, we describe the proposed method in detail.",
"The main notations used are as follows: D l = { ( x i , y i ) } n l i =1 and D u = { ( x i ) } n u i =1 denote the labeled and unlabeled sets, respectively.",
"x i and y i are the i -th sample and its true label, respectively, and n l and n u are the numbers of labeled and unlabeled samples, respectively.",
"p ij denotes the prediction probability of the j -th class on the i th sample.",
"Let C be the true class distribution of the samples.",
"The output probability (i.e., confidence) $p_i$ associated with the predicted label on sample $x_i$ and the predicted (i.e., output) class distribution $\tilde{C}$ of the samples are defined as $p_i = \max_j(p_{ij})$ and $\tilde{C}[j] = \sum_{i=1}^{n_{data}} p_{ij} / n_{data}$, where $j \in \{1, \ldots, n_c\}$ and $n_c$ is the number of classes.",
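In code, these two quantities are straightforward to compute from softmax outputs; a minimal NumPy sketch (our own illustration):

```python
import numpy as np

def confidences_and_distribution(probs):
    """probs: (n_samples, n_classes) array of softmax outputs.
    Returns per-sample confidences p_i = max_j p_ij and the output
    class distribution C~[j] = mean over samples of p_ij."""
    p = probs.max(axis=1)
    c = probs.mean(axis=0)
    return p, c

probs = np.array([[0.9, 0.1], [0.3, 0.7], [0.6, 0.4]])
p, c = confidences_and_distribution(probs)
print(p)  # [0.9 0.7 0.6]
print(c)  # [0.6 0.4]
```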
"The pseudocode for the preliminary stage is summarized in Alg. 1.",
"In the preliminary stage, the prediction confidences $\vec{P}_l$ for the labeled samples in $D_l$ and the estimated class distribution $\vec{C}_u$ of the unlabeled set $D_u$ are calculated.",
"Using $D_l$, the model is reinitialized and retrained $T$ times using a resampling method such as cross-validation.",
"In low-resource settings, such retraining enables more reliable predictions by averaging the results.",
"Each sample in P l is evaluated when the validation loss is the lowest.",
"Each sample should be validated at least once; the prediction confidences are averaged for each sample.",
"$P_l$ (and $P_u$ in Alg. 2 as well) is sorted in order of size for confidence comparison between the two different sample sets, $D_l$ and $D_u$, in the main stage; we denote the sorted versions as $\vec{P}_l$ (and $\vec{P}_u$ for $P_u$).",
"When retraining $T$ times, the output class distributions of the unlabeled set $D_u$ are obtained and calibrated (this calibration is defined in Section 3.3).",
"Then, the $T$ calibrated class distributions are averaged, resulting in $\vec{C}_u$.",
"After this stage, $\vec{P}_l$ and $\vec{C}_u$ are used to calculate the similarities for the two stop criteria, conf-sim and class-sim, respectively.",
"After the preliminary stage, we train all the labeled samples and refer to this stage as the main stage.",
"The combined BUS-stop method applied in the main stage is summarized in Alg. 2.",
"The unlabeled set is predicted at every epoch during training.",
"Conf-sim The first proposed stop criterion, conf-sim ($S_{conf}$), represents the similarity of the prediction confidences $\vec{P}_u$ for the unlabeled samples to the reference confidences $\vec{P}_l$.",
"To calculate the similarity between $\vec{P}_u$ and $\vec{P}_l$, their dimensions must be the same.",
"We sample $\vec{P}_u$ at regular intervals of $n_u / n_l$ such that it is the same size as $\vec{P}_l$, and denote the subsampled sequence as $\dot{P}_u$.",
"We use the Euclidean distance to calculate the similarity, resulting in $S_{conf}$.",
"Then, the first stop criterion is the point at which $S_{conf}$ has the lowest value, i.e., $\dot{P}_u$ is most similar to $\vec{P}_l$.",
"There is a natural concern that $\dot{P}_u$ is likely to produce higher (thus dissimilar) confidences than $\vec{P}_l$, because $\dot{P}_u$ is obtained by training all the labeled samples, unlike $\vec{P}_l$.",
"However, the fact that the confidence for each sample in (cid:126)P l is obtained when the validation error is the lowest can alleviate this concern.",
"Thereby, $S_{conf}$ can be a rough criterion for avoiding under- and over-fitting, and can reflect the trend of the loss, based on comparison with the reference confidences.",
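A minimal NumPy sketch of the conf-sim computation as we understand it; the even-interval subsampling follows the description above, and details such as index rounding are our own assumptions:

```python
import numpy as np

def conf_sim(p_u, p_l):
    """S_conf: Euclidean distance between sorted unlabeled confidences,
    subsampled at regular intervals to the size of the sorted labeled
    reference confidences. Lower = more similar; stop at the minimum."""
    p_u = np.sort(np.asarray(p_u))
    p_l = np.sort(np.asarray(p_l))
    idx = np.linspace(0, len(p_u) - 1, num=len(p_l)).round().astype(int)
    return float(np.linalg.norm(p_u[idx] - p_l))
```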
"Class-sim The second proposed stop criterion is class-sim ($S_{class}$).",
"The predicted class distribution $\tilde{C}_u$ on the unlabeled set is compared with the estimated class distribution $\vec{C}_u$ from the preliminary stage.",
"The assumption is that a well-trained model can also predict the class distribution more accurately.",
"Therefore, estimation of the true class distribution is crucial.",
"A calibration method that facilitates better estimation of the class distribution is presented in Section 3.3.",
"We use the cosine similarity to calculate the similarity between $\tilde{C}_u$ and $\vec{C}_u$, and obtain $S_{class}$.",
"The second stop criterion is the point at which $S_{class}$ has the highest value, i.e., $\tilde{C}_u$ is most similar to $\vec{C}_u$.",
"Thereby, S class can reflect the short-term trend of the accuracy because it is more likely that the outputs of a higher accuracy model are closer to the true class distribution.",
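A corresponding sketch for class-sim (our own illustration):

```python
import numpy as np

def class_sim(c_pred, c_est):
    """S_class: cosine similarity between the predicted class distribution
    on the unlabeled set and the pre-estimated (calibrated) distribution.
    Higher = more similar; stop at the maximum."""
    c_pred, c_est = np.asarray(c_pred), np.asarray(c_est)
    return float(np.dot(c_pred, c_est) /
                 (np.linalg.norm(c_pred) * np.linalg.norm(c_est)))
```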
"BUS-stop Finally, we combine the two stop criteria, conf-sim and class-sim, to form the BUS-stop method, as depicted in Alg. 2.",
"Figure 1: Calibration example in binary classification: with $B = (0.5, 0.5)$, $\tilde{C}_u = (0.65, 0.35)$, $Acc_{val} = 0.8$ and $Acc_{min} = 0.5$, Equations (1) and (2) give $\vec{C}_u = (0.5 + \frac{5}{3}(0.65 - 0.5),\ 0.5 + \frac{5}{3}(0.35 - 0.5)) = (0.75, 0.25)$.",
"A simple product of the two stop criteria can be an ineffective stop criterion because the sizes of $S_{conf}$ and $S_{class}$ are relative.",
"Our combined stop-criterion is to save the model with the highest $S_{class}$ among the epochs from the one with the lowest $S_{conf}$ to the subsequent $(n_{que} - 1)$-th epoch.",
"This technique enables fine-stopping by considering both S conf and S class , which reflect the long-term and short-term performances, respectively.",
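A post-hoc sketch of this combination; the paper implements it online with a fixed-size queue, described just below, whereas `bus_stop_epoch` is a hypothetical helper operating on recorded per-epoch scores:

```python
def bus_stop_epoch(s_conf, s_class, n_que=5):
    """Pick the epoch with the highest S_class within the window of n_que
    epochs starting at the epoch with the lowest S_conf."""
    start = min(range(len(s_conf)), key=lambda e: s_conf[e])
    window = range(start, min(start + n_que, len(s_class)))
    return max(window, key=lambda e: s_class[e])
```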
"It is to be noted that early stopping methods should be operated as an ongoing process, and not as a type of post-hoc method.",
"To this end, we use a fixed-size queue, Queue, with its size $n_{que}$ as a hyperparameter, as shown in Alg. 2.",
"3.3 Calibration of Class Distribution",
"In this section, we describe the calibration of the predicted class distribution.",
"The calibration method aims to better estimate the true class distribution of the unlabeled set, thereby improving the performance of class-sim, particularly for imbalanced classification.",
"Trained neural networks often involve sampling biases.",
"For example, in binary classification, the prediction results of a model trained with a class ratio a : b tend to follow the distribution of a : b .",
"Thus, when the class distributions are different in the test and training sets, the model performance can deteriorate.",
"Let us suppose the following somewhat ideal and naive situations.",
"Let C u be the true class distribution of the unlabeled set.",
"If the model is perfectly trained with an accuracy of 1.0, the output class distribution will be equal to C u .",
"On the other hand, if the model fails to learn any inference knowledge from training, the model will output its predictions only by its sampling bias; i.e., when the accuracy is the same as the random expectation (denoted as $Acc_{min}$, e.g., 0.5 in binary classification), the output class distribution will be equal to the sampling bias $B$.",
"Thus, the model accuracy can reflect whether the output class distribution is closer to the sampling bias or the true distribution.",
"In the preliminary stage, we obtained the models' proxy accuracy and output class distribution as $Acc_{val}$ and $\tilde{C}_u$, respectively.",
"Data | Class | Train | Test | Len",
"---|---|---|---|---",
"SST-2 | 2 | 6.9K | 1.8K | 19",
"IMDB | 2 | 25K | 25K | 231",
"Elec | 2 | 25K | 25K | 107",
"AG-news | 4 | 120K | 7.6K | 38",
"DBpedia | 14 | 560K | 70K | 49",
"Table 1: Statistics for datasets.",
"Assuming that there is an approximate linear relationship, we can define a proportional expression as follows: $(1 - Acc_{min}) : (Acc_{val} - Acc_{min}) \approx (C_u - B) : (\tilde{C}_u - B)$ (1).",
"We rearrange the above expression in terms of $C_u$: $C_u \approx B + \frac{1 - Acc_{min}}{Acc_{val} - Acc_{min}} (\tilde{C}_u - B)$ (2).",
"Then, we denote the approximation of $C_u$ as $\vec{C}_u$.",
"Considering the class distribution as a vector, Eq. (2) is a type of extrapolation.",
"$B$ can be defined as the class distribution of $D_{train}$ or as the predicted distribution $\tilde{C}_{val}$ on the validation set of the preliminary stage.",
"In addition, $Acc_{val}$ can be replaced with the F1-score.",
"Fig. 1 illustrates an example of our calibration method.",
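A minimal NumPy sketch of Eq. (2), checked against the Figure 1 numbers; the function name and argument layout are our own:

```python
import numpy as np

def calibrate(c_pred, bias, acc_val, acc_min):
    """Eq. (2): extrapolate the output class distribution C~ away from the
    sampling bias B, scaled by how far the proxy accuracy Acc_val lies
    above chance (Acc_min)."""
    c_pred, bias = np.asarray(c_pred), np.asarray(bias)
    return bias + (1 - acc_min) / (acc_val - acc_min) * (c_pred - bias)

# The worked example from Figure 1: B=(0.5,0.5), C~=(0.65,0.35), Acc_val=0.8
print(calibrate([0.65, 0.35], [0.5, 0.5], acc_val=0.8, acc_min=0.5))
# -> [0.75 0.25]
```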
"We conducted extensive experiments using five text classification datasets.",
"The statistics are summarized in Table 1.",
"These datasets have been extensively used in NLP research and are publicly available.",
"The SST-2 (Socher et al., 2013), IMDB (Maas et al., 2011), and Elec (McAuley and Leskovec, 2013) datasets are used for sentiment analysis.",
"SST-2 and IMDB include movie reviews, and Elec includes reviews on Amazon electronics.",
"AG-news (Zhang et al., 2015) and DBpedia (Zhang et al., 2015) are topic classification tasks for Wikipedia and news articles, respectively.",
"For each dataset, we sampled K labeled samples per class from the training set.",
"$K$ was set to 50 for low-resource settings; we also experimented by varying $K \in \{50, 100, 200, 400, 800, 1600\}$.",
"We used the test samples as the unlabeled set for each dataset, which is referred to as transductive setting in few-shot classification (Liu et al., 2019).",
"In this section, we describe the various stop-criteria for comparison with our method.",
"EB The EB (Mahsereci et al., 2017) is a criterion based on gradients of training samples.",
"The EB-criterion stops when the following condition is met: $1 - \frac{|S|}{D} \sum_{k=1}^{D} \frac{(\nabla L_{S,k})^2}{\hat{\Sigma}_k} > 0$ (3), where $S$ is a sample set, $D$ is the number of parameters, $\nabla L$ denotes the gradients of the loss, and subscript $k$ indicates the $k$-th weight of the total parameters.",
"$\hat{\Sigma}$ is the variance estimator, calculated as $\hat{\Sigma}_k = \frac{1}{|S| - 1} \sum_{x \in S} (\nabla l_k(x) - \nabla L_{S,k})^2$ (4), where $\nabla l(x)$ is the loss gradient on sample $x$.",
"Note that $\nabla L_S = \frac{1}{|S|} \sum_{x \in S} \nabla l(x)$.",
"For further details, refer to Mahsereci et al. (2017).",
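A NumPy sketch of the criterion as reconstructed in Eqs. (3)-(4); obtaining per-sample gradients from a real model is the expensive part and is omitted here:

```python
import numpy as np

def eb_criterion(per_sample_grads):
    """Stop when 1 - |S|/D * sum_k (grad_L_{S,k}^2 / Sigma_k) > 0.
    per_sample_grads: (|S|, D) array of per-sample loss gradients."""
    S, D = per_sample_grads.shape
    grad_L = per_sample_grads.mean(axis=0)                       # (D,)
    sigma = ((per_sample_grads - grad_L) ** 2).sum(axis=0) / (S - 1)
    return 1 - (S / D) * np.sum(grad_L ** 2 / sigma) > 0
```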
"LID Lee and Chung (2021) approximated LID as follows: $LID = \sum_{x \in D_u} -\left[ \frac{1}{m} \sum_{i=1}^{m} \ln \frac{d_i(\vec{z}(x))}{d_m(\vec{z}(x))} \right]^{-1}$ (5), where $\vec{z}(x)$ is the representation vector of sample $x$, and $d_i$ is the Euclidean distance between $\vec{z}(x)$ and its $i$-th nearest neighbor.",
"m is a hyperparameter, which denotes the number of nearest neighbors.",
"The lowest LID is the stop criterion.",
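A NumPy sketch of Eq. (5); the negation follows the standard MLE LID estimator (signs were lost in extraction), and the O(n^2) pairwise-distance computation is for illustration only:

```python
import numpy as np

def lid_score(z, m=20):
    """Sum over samples of the MLE local intrinsic dimensionality estimate,
    using the m nearest neighbours of each representation vector.
    z: (n, d) array of representation vectors."""
    dists = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)        # exclude self-distances
    knn = np.sort(dists, axis=1)[:, :m]    # d_1 ... d_m per sample
    ratios = np.log(knn / knn[:, -1:])     # ln(d_i / d_m), all <= 0
    return float(np.sum(-1.0 / ratios.mean(axis=1)))
```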
"Val-stop_split(x) and Val-stop_add(x) Val-stop denotes validation-based stopping.",
"Val-stop_split(x) indicates that $x$ validation samples per class are taken from the labeled set.",
"Therefore, $K - x$ samples are trained and $x$ samples are validated for each class.",
"Val-stop_add(x) indicates that $x$ additional samples per class are used for validation; i.e., Val-stop_add(x) uses a total of $K + x$ labeled samples per class.",
"Val-stop_add(x) has an unfair advantage because it uses additional labeled samples.",
"PE-stop-epoch The stopping epoch is considered a hyperparameter, which is pre-estimated with cross-validation, as described in Section 2.",
"We use four-fold cross-validation.",
"Conf-sim and class-sim can also be used as a single stop-criterion, as mentioned before.",
"We compare the single criteria with the combined BUS-stop criterion.",
"Conf-sim stops when S conf is the lowest, and class-sim stops when S class is the highest.",
"BERT-base (Devlin et al., 2019) was adopted as our text encoder.",
"The Adam optimizer (Kingma and Ba, 2015) was applied with the categorical cross-entropy loss (i.e., $-\sum y_i \log p_i$), and its learning rate was set to 3e-5.",
"The dropout (Srivastava et al., 2014) was set to 0.2, and the batch size was 16.",
"All the stop-criteria were evaluated simultaneously for each run to reduce the variance of the estimation.",
"We averaged 10 results in all the experiments.",
"In EB, 64 random training samples were used for $S$ in Eq. (3).",
"In LID, the final vector of the [CLS] token in the BERT model was assigned to $\vec{z}(x)$ in Eq. (5), and the best $m$ was selected from $\{5, 10, 20, 50, 100\}$.",
"In BUS-stop, $n_{que}$ in Alg. 2 was set to five.",
"Note that K is the number of training samples per class.",
"When K was set to 50 , T and r in the preliminary stage (see Alg. 1) were set to 5 and 1 : 1 , respectively.",
"When K was set above 50 , T and r were set to 4 and 3 : 1 , respectively.",
"In our calibration method, we used $\tilde{C}_{val}$ as $B$ and the macro F1-score as $Acc_{val}$.",
"Table 2 shows the results when K = 50 for training.",
"It is noted that the original test sets have a balanced class distribution.",
"We also report the loss measure as well as accuracy because loss can imply over-training.",
"As shown in Table 2, our BUS-stop method exhibits the best performance on average, and its accuracy is better even than Val-stop_add(25), which uses a larger number of labeled samples.",
"Note that Val-stop add (25) uses a total of 75 labeled samples per class.",
"The performance of Val-stop split (25) indicates that splitting data for validation can result in poor performance in low-resource settings.",
"LID underperforms compared to the PE-stop-epoch that does not require unlabeled samples.",
"Conf-sim shows the second-best loss on average.",
"Class-sim underperforms as a stop criterion by itself.",
"However, the BUS-stop method, which combines these two criteria, shows better performance than each one on average.",
"Figure 2 displays the results of conf-sim and class-sim over the epochs.",
"More examples are presented in Appendix A.",
"In Fig. 2, the conf-sim curve is similar to the long-term trend of the loss; however, it does not accurately reflect the short-term fluctuation of the performance from epochs 7-16.",
"Method | SST-2 Acc | SST-2 F1 | SST-2 Loss | IMDB Acc | IMDB F1 | IMDB Loss | Elec Acc | Elec F1 | Elec Loss | Avg Acc | Avg F1 | Avg Loss",
"---|---|---|---|---|---|---|---|---|---|---|---|---",
"Val-stop_split(25) | 0.788 | 0.719 | 0.499 | 0.732 | 0.674 | 0.589 | 0.783 | 0.724 | 0.507 | 0.768 | 0.706 | 0.532",
"EB | 0.846≃ | 0.786≃ | 0.504 | 0.810 | 0.749 | 0.568 | 0.839 | 0.789 | 0.541 | 0.832 | 0.775 | 0.537",
"LID | 0.750 | 0.698 | 0.632 | 0.712 | 0.668 | 0.678 | 0.780 | 0.728 | 0.574 | 0.747 | 0.698 | 0.628",
"PE-stop-epoch | 0.843 | 0.779 | 0.527 | 0.821 | 0.763 | 0.589 | 0.843 | 0.789 | 0.521 | 0.836 | 0.777 | 0.545",
"Conf-sim (ours) | 0.816 | 0.754 | 0.427 | 0.813 | 0.750 | 0.432≃ | 0.835 | 0.775 | 0.398 | 0.821 | 0.760 | 0.419",
"Class-sim (ours) | 0.862≃ | 0.797≃ | 0.489 | 0.844≃ | 0.779≃ | 0.510 | 0.873≃ | 0.807≃ | 0.409 | 0.860 | 0.794 | 0.469",
"BUS-stop (ours) | 0.860 | 0.792 | 0.379 | 0.849 | 0.787 | 0.406 | 0.876 | 0.815 | 0.343 | 0.861 | 0.798 | 0.376",
"Val-stop_add(25) | 0.823 | 0.767 | 0.412 | 0.820 | 0.767 | 0.457 | 0.837 | 0.784 | 0.407 | 0.827 | 0.773 | 0.426",
"Table 4: Performance comparison in an imbalanced setting of binary classification tasks.",
"On the other hand, class-sim is observed to be well responsive to the short-term fluctuation of the accuracy, but does not reflect the long-term trend.",
"BUS-stop, which is a combination of these two methods, takes advantage of both the short- and long-term criteria, and thereby facilitates fine stopping.",
"The EB-criterion shows statistically similar accuracy to the BUS-stop method on most datasets.",
"In the EB-criterion and PE-stop-epoch, the average loss is not good enough compared to the high accuracy.",
"The accuracy and loss show somewhat conflicting results.",
"This is due to over-confidence on the misclassified samples, caused by over-training.",
"Note that $Loss = -\sum y_i \log p_i$.",
"Over-confidence on the wrong label makes $p_i$ close to zero on its true label $y_i$.",
"Thus, an excessively low $p_i$ can increase the loss drastically.",
"Table 3 lists the over-confidence error (OE); the equation for OE is presented in Thulasidasan et al. (2019).",
"This confidence error can be detrimental in various applications, as described by Guo et al. (2017).",
"We experimented with an imbalanced setting in binary classification tasks.",
"For testing, we sampled 1,000 instances in the SST-2 test set, and 10,000 instances each in the IMDB and Elec test sets, with a class distribution of 2 : 8 (negative:positive).",
"The macro F1-score is also reported.",
"Table 4 shows the results when K was set to 50 for training.",
"In most cases, BUS-stop exhibits the best performance with respect to the accuracy as well as loss.",
"In addition, it is noted that BUS-stop outperforms the other methods with a greater margin in an imbalanced setting than in a balanced one (Table 2).",
"It is observed that ratios marked with '≃' are fewer in the imbalanced setting.",
"Class-sim shows the best or second-best accuracy among the datasets.",
"It is observed that the output class distribution can be an important indicator for a better model.",
"Table 5 shows the results in various imbalanced settings of the SST-2 (both the training and test sets are imbalanced).",
"The number of training samples was fixed to 100 for the different class-distribution settings.",
"In general, when the class distributions of the training and test sets are similar, the results show better performance for all three methods, EB, BUS-stop, and Val-stop_add(25).",
"In most cases, BUS-stop consistently outperforms Val-stop add (25) and EB, and the margin is greater when the class distributions are more different between the training and test sets.",
"This result indicates that BUS-stop is robust to imbalanced classification.",
"Impact of the training size Figure 3 indicates the accuracy curve with respect to the training size, using the IMDB dataset.",
"The $x$ values of Val-stop_add(x) and Val-stop_split(x) were set to 25, 25, 50, 100, 200, and 400, according to the increase in $K$.",
"Figure 3: Accuracy by different training sizes in IMDB.",
"It can be observed that the performance of BUS-stop is good in the sufficient-data regime as well.",
"However, the performances of the three stop-criteria converge almost similarly with the increase in the training size.",
"Splitting the samples for validation does not deteriorate the performance when $K$ is greater than 400.",
"Rather, Val-stop split ( x ) performs slightly better when K is 1600.",
"This result suggests that when sufficient labeled data are available, validation-based stopping can be a better choice.",
"Calibration performance In the BUS-stop method, accurate estimation of the class distribution plays a crucial role.",
"The cosine similarities between the class distribution of the test set and the distributions produced by various estimators are shown in Table 6, where the uncalibrated output distribution ($\tilde{C}_u$) and the estimated distributions from the calibration methods based on the Acc-score (Cali_Acc) and the macro F1-score (Cali_F1) are compared.",
"When the class distributions are similar between the test and training sets, the performance of $\tilde{C}_u$ is slightly better than those of the other estimators.",
"Figure 4: BUS-stop accuracy for different class distribution estimators in the 16 imbalanced settings depicted in Table 6.",
"Method | Time complexity | SST-2 ($n_u$ = 1.8k) | DBpedia ($n_u$ = 70k)",
"---|---|---|---",
"EB | $g(n_l)$ + … | 0.32 m | 0.49 m",
"LID | $g(n_l) + p(n_u)$ | 0.12 m | 5.02 m",
"PE-stop-epoch | $(T+1)\,g(n_l)$ | 0.43 m | 1.14 m",
"BUS-stop | $(T+1)\,g(n_l) + p(n_u)$ | 0.47 m | 5.97 m",
"Val-stop_add(25) | $g(n_l)$ | 0.07 m | 0.19 m",
"Table 7: Running time comparison for different stop-criteria.",
"However, the estimation by calibration based on the F1-score (Cali_F1) is better on average, particularly when the class distributions of the test and training sets are different.",
"Figure 4 indicates the BUS-stop accuracies when each model stops based on the estimated class distribution in Table 6 (the same color corresponds to one cell in Table 6).",
"For example, the yellow colors correspond to the settings in which the class distribution is 2:8 and 8:2 in the training and test sets, respectively.",
"As shown in Fig. 4, the better the class distribution is estimated, the higher is the accuracy of BUS-stop.",
"Such high correlation indicates the importance of the class distribution estimator.",
"This result is consistent with our assumption that the output class distribution of better models will be closer to the true distribution.",
"Running time The running times are not directly comparable owing to the different hyperparameter settings for each method.",
"For example, the BUS-stop and PE-stop-epoch require a separate preliminary stage that consumes additional time.",
"We add up both the times taken in the preliminary stage and main stage.",
"We denote the average running time per epoch as g ( n l ) for training the labeled samples and p ( n u ) for predicting the unlabeled samples.",
"The time complexity and the measured time are shown in Table 7.",
"Note that $T$ is the number of retrainings in the preliminary stage, which was set to five.",
"Dataset | Val-stop_split(25) local | Val-stop_split(25) global | Val-stop_add(25) local | Val-stop_add(25) global | BUS-stop (local)",
"---|---|---|---|---|---",
"Balanced: SST-2 | 0.775 | 0.785 | 0.819 | 0.840 | 0.831",
"Balanced: IMDB | 0.746 | 0.786 | 0.824 | 0.838 | 0.828",
"Balanced: Elec | 0.781 | 0.805 | 0.842 | 0.852 | 0.848",
"Balanced: AG-news | 0.846 | 0.857 | 0.867 | 0.871 | 0.865",
"Imbalanced: SST-2 | 0.788 | 0.807 | 0.823 | 0.832 | 0.860",
"Imbalanced: IMDB | 0.732 | 0.757 | 0.820 | 0.834 | 0.849",
"Imbalanced: Elec | 0.783 | 0.820 | 0.837 | 0.853 | 0.876",
"Table 8: Accuracy by global selection in Val-stop.",
"The experimental settings are the same as in Section 5.1.",
"The time measurement was conducted on a PC with an Intel Core i7 CPU, 64-GB RAM and an NVIDIA Titan X Pascal GPU.",
"As shown in the expression of time complexity, the running time depends on the numbers of labeled and unlabeled samples, n l and n u , respectively.",
"In DBpedia, which has a large number of unlabeled samples, n u , the LID and BUS-stop methods take the two longest running times.",
"On the other hand, in SST-2, the PE-stop-epoch and BUS-stop methods show the two longest running times, because $n_u$ is relatively small, such that $g(n_l)$ is more dominant than $p(n_u)$.",
"BUS-stop requires a longer running time than the other methods due to the $T$-times retraining and the continual prediction on the unlabeled set.",
"To reduce the time, we can adjust the T value or sample a smaller amount of data from the unlabeled set.",
"Limitations The proposed BUS-stop method was designed for classification tasks, and thereby can be applied when the model can output confidences.",
"Regression tasks as well can be addressed by converting into classification problems.",
"Continuous values normalized between 0 and 1 can be represented as confidences in a binary classification.",
"However, it may be difficult to apply to other more complex tasks (e.g., text summarization).",
"This study is limited to classification tasks.",
"Another limitation is that BUS-stop, which is a non-validation stop-method, cannot make direct comparisons between two models from different runs.",
"Early stopping can be seen as selecting the best resulting model over the epochs.",
"In a similar way, it is also possible to select the best model among multiple runs.",
"We refer to the former as local selection and the latter as global selection.",
"In validation-based stopping, the global selection is simply to select the model with the lowest validation loss over multiple runs.",
"However, the non-validation methods have no clear criterion for this purpose.",
"We repeated training five runs for each and selected the best model among the runs based on validation loss.",
"Other experimental settings are the same as in Section 5.",
"As shown in Table 8, the global selection in validation-based stopping improves performance across the datasets in both balanced and imbalanced settings.",
"However, in the imbalanced setting, the BUS-stop still results in better performance.",
"Note that Val-stop add (25) uses additional labeled samples.",
"We also report that the global selections that are based on the S conf , S class , and LID did not show significant performance improvement in our experiment.",
"The development of non-validation global selection methods is left for future work.",
"Validation-based early stopping can be detrimental in low-resource settings because the reduction in the number of samples by validation split may result in insufficient samples for training.",
"In this study, we proposed an early stopping method called BUS-stop, based on unlabeled samples.",
"Moreover, we proposed a calibration method to better estimate the true class distribution, which was used in the BUS-stop method to improve the performance.",
"We conducted experiments on five popular text classification datasets.",
"The results indicated that BUS-stop outperformed the existing stop-criteria in both balanced and imbalanced settings.",
"In particular, BUS-stop showed robustness to imbalanced classification.",
"The proposed BUS-stop method enables the training of all the available samples and presents a better stopping point using large unlabeled samples.",
"In future, we plan to better exploit the unlabeled samples in self-training schemes.",
"We can also combine BUS-stop and self-training methods.",
"BUS-stop can be used to improve the performance of the initial model, which plays an important role in the final self-training performance.",
"Additionally, we consider applying the BUS-stop to domain adaptation tasks in the future.",
"This research was supported by the Bio-Synergy Research Project (NRF-2016M3A9C4939665) of the Ministry of Science and ICT through the National Research Foundation of Korea (NRF) and the NRF grant funded by the Korean government (Ministry of Science and ICT) (NRF-2018M3C7A1054932), and partly supported by the Institute for Information and communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) [No. 2019-0-01842, Artificial Intelligence Graduate School Program (GIST)]."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other"
] |
[
"Our goal is procedural text comprehension, namely tracking how the properties of entities (e.g., their location) change with time given a procedural text (e.g., a paragraph about photosynthesis, a recipe).",
"This task is challenging as the world is changing throughout the text, and despite recent advances, current systems still struggle with this task.",
"Our approach is to leverage the fact that, for many procedural texts, multiple independent descriptions are readily available, and that predictions from them should be consistent (label consistency).",
"We present a new learning framework that leverages label consistency during training, allowing consistency bias to be built into the model.",
"Evaluation on a standard benchmark dataset for procedural text, ProPara (Dalvi et al., 2018), shows that our approach significantly improves prediction performance (F1) over prior state-of-the-art systems.",
"We address the task of procedural text comprehension, namely tracking how the properties of entities (e.g., their location) change with time throughout the procedure (e.g., photosynthesis, a cooking recipe).",
"This ability is an important part of text understanding, allowing the reader to infer unstated facts such as how ingredients change during a recipe, what the inputs and outputs of a scientific process are, or who met whom in a news article about a political meeting.",
"Although several procedural text comprehension systems have emerged recently (e.g., EntNet (Henaff et al., 2017), NPN (Bosselut et al., 2018), and ProStruct (Tandon et al., 2018)), they still make numerous prediction errors.",
"A major challenge is that fully annotated training data for this task is expensive to collect.",
"To address this challenge, and thus improve performance, our goals are two-fold: first, to better leverage the training data for procedural text comprehension that is available, and second, to utilize additional unlabeled data for the task (semi-supervised learning).",
"Our approach in each case is to exploit label consistency , the property that two distinct texts covering the same procedure should be generally consistent in terms of the state changes that they describe, which constitute the labels to be predicted for the text.",
"For example, in different texts describing photosynthesis, we expect them to be generally consistent about what happens to oxygen (e.g., that it is created), even if the wordings differ (Figure 1).",
"Using multiple, distinct passages to understand a process or procedure is challenging.",
"Although the texts describe the same process, they might express the underlying facts at different levels of granularity, using different wordings, and including or omitting different details.",
"As a result, the details may differ between paragraphs, making them hard to align and to check for consistency.",
"Nonetheless, even if the details differ, we conjecture that the top-level summaries of each paragraph, which describe the types of state change that each entity undergoes, will be mostly consistent.",
"Figure 2: Three (simplified) passages from ProPara describing photosynthesis, the (gold) state changes each entity undergoes at each step $s_1, s_2, \ldots, s_T$, and the summary of state changes that each entity undergoes (an aggregation of the step-by-step changes), where M = MOVED, D = DESTROYED, C = CREATED.",
"For example, although independent texts describing photosynthesis vary tremendously, we expect them to be consistent about what generally happens to sugar, e.g., that it is created (Figure 2).",
"In this paper, we introduce a new training framework, called L a CE (Label Consistency Explorer), that leverages label consistency among paragraph summaries.",
"In particular, it encourages label consistency during end-to-end training of a neural model, allowing consistency bias to improve the model itself, rather than be enforced in a post-processing step, e.g., posterior regularization (Ganchev et al., 2010).",
"We evaluate on a standard benchmark for procedural text comprehension, called ProPara (Dalvi et al., 2018).",
"We show that this approach achieves a new state-of-the-art performance in the fully supervised setting (when all paragraphs are annotated), and also demonstrate that it improves performance in the semi-supervised setting (us-ing additional, unlabeled paragraphs) with limited training data.",
"In the latter case, summary predictions from labeled data act as noisy gold labels for the unlabeled data, allowing additional learning to occur.",
"Our contributions are thus: 1. A new learning framework, L a CE, applied to procedural text comprehension that improves the label consistency among di erent paragraphs on the same topic.",
"2. Experimental results demonstrating that L a CE achieves state-of-the-art performance on a standard benchmark dataset, ProPara, for procedural text.",
"Leveraging Label Consistency Leveraging information about label consistency (i.e., similar instances should have consistent labels at a certain granularity) is an e ective idea.",
"It has been studied in computer vision (Haeusser et al., 2017; Chen et al., 2018) and IR (Clarke et al., 2001; Dumais et al., 2002).",
"Learning by association (Haeusser et al., 2017) establishes implicit cross-modal links between similar descriptions and leverage more unlabeled data during training.",
"Schtze et al. (2018); Hangya et al. (2018) adapt the similar idea to exploit unlabeled data for the cross-lingual classifica-tion.",
"We extend this line of research in two ways: by developing a framework allowing it to be applied to the task of structure prediction; and by incorporating label consistency into the model itself via end-to-end training, rather than enforcing consistency as a post-processing step.",
"Semi-supervised Learning Approaches Besides utilizing the label consistency knowledge, our learning framework is also able to use unlabeled paragraphs, which fits in the literature of semi-supervised learning approaches (for NLP).",
"Zhou et al. (2003) propose an iterative label propagation algorithm similar to spectral clustering.",
"Zhu et al. (2003) propose a semi-supervised learning framework via harmonic energy minimization for data graph.",
"Talukdar et al. (2008) propose a graph-based semi-supervised label propagation algorithm for acquiring open-domain labeled classes and their instances from a combination of unstructured and structured text sources.",
"Our framework extends these ideas by introducing the notion of groups (examples that are expected to be similar) and summaries (what similarities are expected), applied in an end-to-end-framework.",
"Procedural Text Understanding and Reading Comprehension There has been a growing interest in procedural text understanding / QA recently.",
"The ProcessBank dataset (Berant et al., 2014) asks questions about event ordering and event arguments for biology processes.",
"bAbI (Weston et al., 2015) includes questions about movement of entities, however it's synthetically generated and with a small lexicon.",
"Kiddon et al. (2015)'s RECIPES dataset introduces the task of predicting the locations of cooking ingredients, and Kiddon et al. (2016) for recipe generation.",
"In this paper, we continue this line of exploration using ProPara, and illustrate how the previous two lines of work (label consistency and semi-supervised learning) can be integrated.",
"A general condition for applying our method is having multiple examples where, for some properties, we expect to see similar values.",
"For example, for procedural text, we expect paragraphs about the same process to be similar in terms of which entities move, are created, and destroyed; for different news stories about a political meeting, we expect top-level features (e.g., where the meeting took place, who attended) to be similar; for di erent recipes for the same item, we expect loosely similar ingredients and steps; and for di erent images of the same person, we expect some high-level characteristics (e.g., height, face shape) to be similar.",
"Note that this condition does not apply to every learning situation; it only applies when training examples can be grouped, where all group members are expected to share some characteristics that we can identify (besides the label used to form the groups in the first place).",
"More formally, for training, the input is a set of labeled examples ( x gi , y gi ) (where y gi are the labels for x gi ), partitioned into G groups, where the g subscript denotes which group each example belongs to.",
"Groups are defined such that examples of the same group g are expected to have similar labels for a subset of labels y gi .",
"We call this subset the summary labels .",
"We assume that both the groupings and the identity of the summary labels are provided.",
"The output of training is a model M for labeling new examples.",
"For testing, the input is the model M and a set of unlabeled (and ungrouped) examples x t , and the output are their predicted labels y t .",
"Note that this formulation is agnostic to the learning algorithm used.",
"Later, we will consider both the fully supervised setting (all training examples are labeled) and semi-supervised setting (only a subset are labeled).",
"We instantiate this framework for procedural text comprehension, using the ProPara task (Dalvi et al., 2018).",
"In this task, x gi are paragraphs of text describing a process (e.g., photosynthesis), the labels y gi describe the state changes that each entity in the paragraph undergoes at each step (sentence) (e.g., that oxygen is created in step 2), and the groups are paragraphs about the same topic (ProPara tags each paragraph with a topic, e.g., there are three paragraphs in ProPara describing photosynthesis).",
"More precisely, each x gi consists of: the name (topic) of a process, e.g., photosynthesis a sequence (paragraph) of sentences S = [ s 1 , ..., s T ] that describes that process the set of entities E mentioned in that text, e.g., oxygen, sugar and the targets (labels) to predict are: Figure 3: Example of batches constructed from a group (here, the group contains three labeled examples x 1 , x 2 , x 3 ).",
"the state changes that each entity in E undergoes at each step (sentence) of the process, where a state change is one of { Moved,Created,Destroyed,None }.",
"These state changes can be conveniently expressed using a | S | | E | matrix (Figure 2).",
"State changes also include arguments, e.g., the source and destination of a move.",
"We omit these in this paper to simplify the description.",
"Finally, we define the summary labels as the set of state changes that each entity undergoes at some point in the process, without concern for when.",
"For example, in Passage 1 in Figure 2, CO 2 is Moved (M) and Destroyed (D), while sugar is Created (C).",
"These summary labels can be computed from the state-change matrix by aggregating the state changes for each entity over all steps.",
"Our assumption here is that these summaries will generally be the same (i.e., consistent) for di erent paragraphs about the same topic.",
"L a CE then exploits this assumption by encouraging this inter-paragraph consistency during training, as we now describe.",
"While a traditional supervised learning model operates on individual examples, L a CE operates on batches of grouped examples X g .",
"Given a group g containing N labeled examples { x 1 , ..., x N } (we drop the g subscript for clarity), L a CE creates N batches, each containing all the examples but with a di erent x i labeled as primary, along with the gold labels y i for (only) the primary example.",
"(We informally refer to the primary example as the first example in each batch).",
"Then for each batch, L a CE jointly optimizes the usual supervised loss L sup ( y i , y i ) for the primary example, along with a consistency loss between (summary) predictions for all other members of the group and the primary example, L con ( y j , y i ) for all j (cid:44) i .",
"This is illustrated in Figures 4 and 3. This is repeated for all batches.",
"For example, for the three paragraphs about photosynthesis (Figure 2), batch 1 compares the first paragraph's predictions with its gold labels, and also compares the summary predictions of paragraphs 2 and 3 with those of the first paragraph (Figure 3).",
"This is then repeated using paragraph 2, then paragraph 3 as primary.",
"The result is that L a CE jointly optimizes the supervised loss L sup and consistency loss L con to train a model that is both accurate for the given task as well as consistent in its predictions across examples that belong to the same group.",
"This process is approximately equivalent to jointly optimizing the usual supervised loss L sup ( y i , y i ) for all examples in the group, and the pairwise consistency loss L con ( y j , y i ) for all pairs ( x j , x i ) , j (cid:44) i in the group.",
"However, there is an important di erence, namely the relative contributions of L sup and L con is varied among batches, depending on how accurate the predictions for the primary example are (i.e., how small L sup is), as we describe later in Section 4.3.",
"This has the e ect of paying more attention to consistency loss when predictions on the primary are more accurate.",
"We also extend L a CE to the semi-supervised setting as follows.",
"For the semi-supervised setting, where only m of n ( m < n ) examples are labeled, we only form m batches, where each batch has \" M o d e l # $ Predicted state changes Gold state changes Label loss: '() back-propagate combined loss A batch for group + , Consistency loss: -./ \"",
"We now describe how L a CE is applied to our goal of comprehending procedural text.",
"Note that L a CE is agnostic to the learner used within the framework.",
"For this application, we use a simpli-fied version of ProStruct (Tandon et al., 2018), a publicly available system designed for the ProPara task.",
"Our implementation simplifies ProStruct by reusing its encoder, but then predicting (a distribution over) each state change label independently during decoding for every cell in the | S | | E | grid (Figure 2).",
"We briefly summarize this here.",
"ProStruct uses an encoder-decoder architecture that takes procedural text as input and predicts the state changes of entities E in the text as output.",
"During encoding, each step s t is encoded using | E | embeddings, one for each entity e j E .",
"Each embedding represents the action that s t describes, applied to e k .",
"The model thus allows the same action to have di erent e ects on di erent entities (e.g., a transformation destroys one entity, and creates another).",
"For each ( s t , e j ) S E pair, the step is fed into a BiLSTM (Hochreiter and Schmidhuber, 1997), using pretrained GloVe (Pennington et al., 2014) vectors v w for each word w i concatenated with two indicator variables, one indicating whether w i is a word referring to e j , and one indicating whether w i is a verb.",
"A bilinear attention layer then computes attention over the contextualized vectors h i output by the BiLSTM: a i = h i B h ev + b , where B and b are learned parameters, and h ev is the concatenation of h e (the averaged contextualized embedding for the entity words w e ) and h v (the averaged contextualized embedding for the verb words w v ).",
"Finally, the output vector c tj is the attention-weighted sum of the h i : c tj = (cid:80) Ii = 1 a i h i .",
"Here, c tj can be thought of as representing the action s t applied to entity e j .",
"This is repeated for all steps and entities.",
"To decode the action vectors c tj into their resulting state changes they imply, each is passed through a feedforward layer to generate logit ( tj ), a set of logistic activations over the K possible state changes tj for entity e j in step s t .",
"For ProPara, there are K = 4 possible state changes: Move, Create, Destroy, and None .",
"These logits form a distribution over possible state changes to predict, for each entity and step in the text.",
"We then compute loss, described next, using these distributions directly rather than discretizing them into exact predictions at this stage, so as not to lose information.",
"We start by creating training batches for each X g i X g .",
"From a group X g i comprising of n examples, we create n training batches.",
"A batch consists of all n examples ( x 1 , x 2 , ..., x n ), but the loss computation is di erent in each batch.",
"Figure 3 illustrates this.",
"The loss computation in a batch is based on the usual supervised loss and additionally the consistency loss, as follows:",
"Here, L sup ( y 1 , y 1 ) is the negative log likelihood loss * against the gold labels y 1 , and is a hyperpa-rameter tuned on the dev set.",
"To compute the consistency loss L con ( y i , y 1 ), we compare the summaries computed from y i and y 1 .",
"In our particular application, a summary lists all the state changes each entity undergoes, formed by aggregating its step-by-step state changes.",
"For example, for paragraph x 1 in Figure 4, as CO 2 first moves ( M ), then later is destroyed ( D ), we summarize its state changes as s ( CO 2 , y 1 ) = { M,D }.",
"In practice, as our decoder outputs distributions over the four possible values { M,C,D,N } rather than a single value, we summarize by adding and normalizing these distributions, producing a summary distribution s ( e , y j ) over the four values rather than a discrete set of values.",
"To compute the consistency loss L con ( y i , y 1 ) itself, we compare summaries for each entity e that occurs in both paragraph x 1 and paragraph x i (re-ferred to as Ent( x 1 ) and Ent( x i ) respectively), and compute the average mean squared error (MSE) between their summary distributions.",
"We also tried other alternatives (e.g., Kullback-Leibler divergence) for calculating the distance between summary distributions, but mean squared error per* Loss function L sup is exactly same as the loss function used in the base model so that we can measure the e ect of adding consistency loss.",
"forms best.",
"Equation 2 shows the details for computing the consistency loss.",
"Note that each paragraph contains varying number of entities and sentences.",
"It is possible that some paragraphs do not mention exactly the same entities as the labeled paragraph (first element in the batch).",
"In such cases, we penalize the model only for predictions for co-occurring entities.",
"Unmatched entities are not penalized.",
"The supervised loss L sup ( y 1 , y 1 ) is large in the early epochs when the model is not su ciently trained.",
"At this point, it is beneficial for the model to pay no attention to the consistency loss L con ( y j , y 1 ) as the predicted action distributions are inaccurate.",
"To implement this, if L sup is above a defined threshold then the consistency loss term in Equation 1 is ignored (i.e. = 1).",
"Otherwise, Equation 1 is used as is.",
"This can loosely be seen as a form of simulated annealing (Kirkpatrick et al., 1988), using just two temperatures.",
"Note that the time (epoch number) when the temperature (lambda) changes will vary across batches depending on the supervised loss within that batch of data, hence we call it an adaptive loss.",
"We now present results on ProPara, the procedural text comprehension dataset introduced in (Dalvi et al., 2018).",
"There are 187 topics in this dataset and a total of 488 labeled paragraphs (around 3 labeled paragraphs per topic).",
"The task is to track how entities change state through the paragraph (as described in Section 3.2) and answer 4 classes of questions about those changes (7043 / 913 / 1095 questions in each of the train / dev / test partitions re-spectively).",
"We compare L a CE with the baselines and prior state-of-the-art model ProStruct (Tandon et al., 2018) in two settings: (1) Fully supervised learning (using all the training data).",
"(2) Semi-supervised learning (using some or all of the training data, plus additional unlabeled data).",
"We evaluated L a CE by comparing its performance against published, state-of-the-art results on ProPara, using the full training set to train L a CE.",
"The results are shown in Table 1. In Table 1, all the baseline numbers are the results reported in (Tan-don et al., 2018).",
"Note that all these baselines are trying to reduce the gap between predicted labels and gold labels on the training dataset.",
"L a CE, however, also optimizes for consistency across labels for groups of paragraphs belonging to the same topic.",
"As L a CE uses parts of ProStruct as its learning algorithm, the gains over ProStruct appear to be coming directly from its novel learning framework described in Section 4.1.",
"To confirm this, we also performed an ablation study, removing the consistency loss term and just using the base model in L a CE.",
"The results are shown in Table 2, and show that the F1 score drops from 56.6 to 53.2, illustrating that the consistency loss is responsible for the improvement.",
"In addition, Table 2 indicates that consistency loss helps improve both precision and recall.",
"Also note that L a CE simplifies parts of ProStruct.",
"For example, unlike ProStruct, L a CE does not use a pre-computed knowledge base during decoding.",
"Thus L a CE is more e cient to train than ProStruct ( > 15x faster at training time).",
"Unlike the other systems in Table 1, L a CE is able to use unlabeled data during training.",
"As described in Section 4.1, given a group containing both labeled and unlabeled paragraphs, we create as many batches as the number of labeled paragraphs in the group.",
"Hence, paragraphs x i with gold labels y i can contribute to both supervised loss L sup and consistency loss L con .",
"Additionally, we can use unlabeled Models Proportion of labeled paragraphs used per training topic 33% 66% 100% P ro S truct 45.4 50.6 54.5 L a CE 47.3 51.2 56.6 L a CE + unlabeled data 49.9 52.9 56.7 Table 3: Comparing L a CE vs. P ro S truct with varying amount of labeled paragraphs available per training topic.",
"paragraphs x j (i.e., without gold labels y j ), while computing consistency loss L con .",
"This way L a CE can make use of unlabeled data during training.",
"To evaluate this, we collected 877 additional unlabeled paragraphs for ProPara topics .",
"As the original ProPara dataset makes some simplifying assumptions, in particular that events are mentioned in chronological order, we used Mechanical Turk to collect additional paragraphs that conformed to those assumptions (rather than collecting paragraphs from Wikipedia, say).",
"Approximately 3 extra paragraphs were collected for each topic in ProPara.",
"Note that collecting unlabeled paragraphs is substantially less expensive than labeling paragraphs.",
"We then trained the P ro S truct and L a CE models varying two di erent parameters: (1) the percentage of the labeled (ProPara) training data used to train the system (2) for L a CE only, whether the additional unlabeled data was also used.",
"This allows us to see performance under di erent conditions of sparsity of labeled data, and (for L a CE) also assess how much unlabeled data can help under those conditions.",
"During training, the unused labeled data was ignored (not used as unlabeled data).",
"We keep the dev and test partitions the same as original dataset, picking a model based on dev performance and report results on test partition.",
"The results are shown in Table 3. In the first two rows, P ro S truct and L a CE are both trained with x% of labeled data, while the last row reports perThe unlabeled paragraphs are available at http://data.",
"allenai.org/propara/ .",
"Table 3 demonstrates that L a CE results in even larger improvements over P ro S truct when the amount of labeled data is limited.",
"In addition, unlabeled data adds an additional boost to this performance, in particular when labeled data is sparse.",
"Further examination suggests that the gains in F1 are resulting mainly from improved recall, as shown in Figure 5. We believe that having access to unlabeled paragraphs and optimizing consistency across paragraphs for training topics, helps L a CE generalize better to unseen topics.",
"We implement our proposed model L a CE in Py-Torch (Paszke et al., 2017) using the AllenNLP (Gardner et al., 2018) toolkit.",
"We added a new data iterator that creates multiple batches per topic (Figure",
"3) which enables easy computation of consistency loss.",
"We use 100D Glove embeddings (Pennington et al., 2014), trained on Wikipedia 2014 and Gigaword 5 corpora (6B tokens, 400K vocab, uncased).",
"Starting from glove embeddings appended by entity and verb indicators, we use bidirectional LSTM layer to create contextual representation for every word in a sentence.",
"We use 100D hidden representations for the bidirectional LSTM (Hochreiter and Schmidhuber, 1997) shared between all inputs (each direction uses 50D hidden vectors).",
"We use attention layer on top of BiLSTM, using a bilinear similarity function similar to (Chen et al., 2016) to compute attention weights over the contextual embedding for each word in the sentence.",
"To compute the likelihood of all state changes Consistency Score (%) Train Test ProStruct 46.70 37.21 L a CE 54.39 38.36 Table 5: Consistency score comparison individually, we use a single layer feedforward network with input dimension of 100 and output 4. In these experiments, we check if the supervised loss L sup is less than a threshold (0.2 in our case) then we use equation 1 and lambda = 0 .",
"05.",
"All hyper-parameters are tuned on the dev data.",
"During training we use multiple paragraphs for a topic to optimize for both supervised and consistency loss.",
"At test time, L a CE's predictions are based on only one given paragraph.",
"All the performance gains are due to the base model being more robust due to proposed training procedure.",
"The code for L a CE model is published at https://github.com/allenai/propara .",
"We first discuss the predicted label consistency across paragraphs for L a CE vs. P ro S truct .",
"We then identify some of the limitations of L a CE.",
"L a CE attempts to encourage consistency between paragraphs about the same topic during training, and yield similar benefit at test time.",
"To examine whether this happens in practice, we compute and report the consistency score between paragraphs about the same topic (Table 5).",
"Specifically, for an entity that appears in two paragraphs about the same topic, we compare whether the summaries of state change predictions for each match.",
"The results are shown in Table 5. The table shows that L a CE achieves greater prediction consistency during training, and that this benefit plays out to some extent at test time even though label consistency is not enforced at test time (we do not assume that examples are grouped at test time, hence consistency between groups cannot be enforced as the grouping is unknown).",
"As an illustration, for the topic describe the life cycle of a tree which is unseen at training time, for the three paragraphs on the topic, ProStruct predicts that tree is created; not-changed; and created respectively, while L a CE correctly predicts that tree is created; created; and created respectively.",
"To understand L a CE's behavior further, we examined cases where L a CE's and P ro S truct 's predictions di er, and examined their agreement with gold labels.",
"In this analysis we found three major sources of errors for L a CE: The label consistency assumption does not always hold: In Section 3.1, we explain that L a CE relies on summary labels being consistent across examples in the same group.",
"We found that for some of the topics in our training dataset this assumption is sometimes violated.",
"E.g., for the topic How does the body control its blood sugar level?",
", there are two di erent paragraphs; one of them describes the entity sugar as being Created and then Destroyed to create bloodsugar , while the other paragraph describes the same event in a di erent way by saying that the entity sugar is Created and then Moved to the blood.",
"L a CE can thus goes wrong when trying to enforce consistency in such cases.",
"Lexical variance between entities across paragraphs: Di erent paragraphs about the same topic may describe the procedure using di erent wordings, resulting in errors.",
"For example, in paragraphs about the topic what happens during photosynthesis?",
", the same entity ( carbon dioxide ) is referred to by two di erent strings, CO 2 in one paragraph and carbon dioxide in another.",
"Currently, L a CE does not take into account entity synonyms, so it is unable to encourage consistency here.",
"An interesting line of future work would be to use the embedding space similarity between entity names, to help address this problem.",
"improve consistency: For the topic Describe how to make a cake at training time, when presented with two paragraphs, L a CE tries to be consistent and incorrectly predicts that cake is Destroyed in both paragraphs.",
"ProStruct does not attempt to improve prediction consistency, here resulting in less consistent but in this case more accurate predictions for this topic.",
"5.5 Directions For Enhancing L a CE Improve L a CE for ProPara: L a CE's performance on ProPara can be improved further by",
"a) soft matching of entities across paragraphs instead of current exact string match",
"b) exploring more systematic ways (e.g., simulated annealing) to define adaptive loss",
"c) using additional sources of unlabeled data (e.g., web, textbooks) weighed by their reliability.",
"Apply L a CE on other tasks: Architecturally, L a CE is a way to train any existing structured prediction model for a given task to produce consistent labels across similar data-points.",
"Hence it can be easily applied to other tasks where parallel data is available (group-ing function) and there is a way to e ciently compare predictions (summary labels) across parallel datapoints, e.g. event extraction from parallel news articles (Chinchor, 2002).",
"Further, summary labels need not be action categories (e.g., Created , Destroyed ).",
"Consistency can also be computed for QA task where multiple parallel text is available for reading comprehension.",
"We plan to explore this direction in the future.",
"Our goal is procedural text comprehension, a task that current systems still struggle with.",
"Our approach has been to exploit the fact that, for many procedures, multiple independent descriptions exist, and that we expect some consistency between those descriptions.",
"To do this, we have presented a taskand model-general learning framework, L a CE, that can leverage this expectation, allowing consistency bias to be built into the learned model.",
"Applying this framework to procedural text, the resulting system obtains new state-of-the-art results on the ProPara dataset, an existing benchmark for procedural text comprehension.",
"It also demonstrates the ability to benefit from unlabeled paragraphs (semi-supervised learning), something that prior systems for this task were unable to do.",
"We have also identified several avenues for further improvement (Section 5.4), and are optimistic that further gains can be achieved.",
"Computations on beaker.org were supported in part by credits from Google Cloud."
] | [
"objective",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"abstain",
"method",
"objective",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"result",
"other"
] |
[
"Cross-domain Chinese Word Segmentation (CWS) remains a challenge despite recent progress in neural-based CWS.",
"The limited amount of annotated data in the target domain has been the key obstacle to a satisfactory performance.",
"In this paper, we propose a semi-supervised word-based approach to improving cross-domain CWS given a baseline segmenter.",
"Particularly, our model only deploys word embeddings trained on raw text in the target domain, discarding complex handcrafted features and domain-specific dictionaries.",
"Innovative subsampling and negative sampling methods are proposed to derive word embeddings optimized for CWS.",
"We conduct experiments on five datasets in special domains, covering domains in novels, medicine, and patent.",
"Results show that our model can obviously improve cross-domain CWS, especially in the segmentation of domain-specific noun entities.",
"The word F-measure increases by over 3.0% on four datasets, outperforming state-of-the-art semi-supervised and unsupervised cross-domain CWS approaches with a large margin.",
"We make our code and data available on Github.",
"Chinese Word Segmentation (CWS) is the first step for many Chinese Natural Language Processing (NLP) tasks (Cai and Zhao, 2016; Zhao et al., 2017).",
"Approaches to CWS could be categorized into two categories: character-based and word-based.",
"The former treats CWS as a sequence labeling problem, labeling each character in a sequence with B/I/E/S ( Beginning, Internal, End, Single ) labels (Tseng et al., 2005).",
"Traditional character-based approaches often use Conditional Yuxiao Ye and Yue Zhang contributed equally to this work.",
"Random Fields (CRF) models to label sequences, with complex hand-crafted discrete features (Peng et al., 2004; Tseng et al., 2005).",
"Unlike character based CWS, word-based CWS operates on a word-level, directly exploiting word-level features.",
"Typical CRF models are replaced with semi-CRF models, in which labels are assigned to subsequences instead of characters (Sarawagi and Cohen, 2005; Liu et al., 2014).",
"Transition-based approaches have also been used to exploit larger feature contexts (Zhang and Clark, 2007).",
"More recent approaches exploit neural networks including Recurrent Neural Networks (RNN) to replace hand-crafted discrete features with real-valued features (Cai and Zhao, 2016; Chen et al., 2015, 2017).",
"Existing studies have achieved satisfactory results for in-domain CWS, with F-scores over 96.0% in the newspaper domain (Chen et al., 2017).",
"Nevertheless, cross-domain CWS remains a big challenge (Liu et al., 2014; Liu and Zhang, 2012).",
"The main reason is the lack of annotated data in the target domain, which makes supervised approaches less useful.",
"To tackle this problem, some unsupervised and semi-supervised approaches have been proposed.",
"One way is to exploit complex features including character types, lexical features and accessor varieties (Wu et al., 2014), which requires much efforts on feature engineering.",
"Another way is to deploy machine learning algorithms including self-training and model ensemble (Gao and Stephan, 2010; Liu and Zhang, 2012; Qiu and Zhang, 2015), which is time-consuming and inefficient.",
"In this paper, we investigate a different approach to deploying unsupervised data for cross-domain CWS, in order to completely break free from the reliance on manual annotation, complex feature engineering, and even parametric training to some extent.",
"We propose a Word-Embedding-Based CWS (WEB-CWS) model, which aims to improve the performance of an existing baseline segmenter in cross-domain CWS.",
"WEB-CWS is a conceptually simple word-based model, using word embeddings, which are expected to carry semantic and syntax information (Mitchell and La-pata, 2010), as the only input of a non-parametric word segmentor.",
"The basic intuition is that embeddings of words within a same context window should be close to each other (Goldberg and Levy, 2014; Mikolov et al., 2013).",
"If a sequence is incorrectly segmented, those incorrectly segmented words are likely to be semantically and syntactically inconsistent with their surrounding words.",
"Consequently, the embedding of an incorrectly segmented word should be far away from embeddings of its surrounding words.",
"Based on the hypothesis above, we propose WEB-CWS.",
"Word embeddings are first derived with a CWS-oriented word embedding model with innovative subsampling and negative sampling methods.",
"A word-embedding-based decoder is then used for segmentation, with cosine similarities among word embeddings as the metric for probability calculation.",
"WEB-CWS is a semi-supervised model, because it only uses word embeddings trained on raw text in the target domain, which is first automatically segmented by the baseline segmenter.",
"The model is also cross-domain in the sense that it can improve the performance of the baseline segmenter, when the source text for training the baseline segmenter and the target text to be segmented are in different domains.",
"The main contributions of this paper include: To our knowledge, we are the first to directly use word embeddings for CWS, without any neural structures, which makes our model conceptually simpler and run faster.",
"We have proposed novel sampling methods to make the embeddings optimized for CWS, which has never been used for embedding training.",
"Our model can be used on top of any existing CWS models to improve their performances, without the need to re-train those models with annotated domain specific data.",
"Our work is related with existing research on word-based CWS, cross-domain CWS, and embedding-based CWS.",
"Instead of labeling a sequence character-wise, word-based CWS tries to pick the most probable segmentation of a sequence.",
"Zhang and Clark (2007) design a statistical method for word-based CWS, extracting word-level features directly from segmented text.",
"The perceptron algorithm (Collins, 2002) is used for training and beam-search is used for decoding.",
"Cai and Zhao (2016) use Gated Combination Neural Networks and LSTM to present both character sequences and partially segmented word sequences, combining word scores and link scores for segmentation.",
"Our work is in line with their work in directly using word information for CWS.",
"In contrast, our method is conceptually simpler by directly using word embeddings.",
"In addition, our work aims at domain-adaptation, rather than training from scratch.",
"Supervised, semi-supervised and unsupervised approaches have been proposed for domain adaptation for CWS.",
"Chen et al. (2017) use an Adversarial Network to learn shared knowledge for different segmentation criteria and domains.",
"This approach requires annotated data in the target domain.",
"However, one challenge for cross-domain CWS is the lack of such annotated data.",
"Liu and Zhang (2012) propose an unsupervised model, in which they use features derived from character clustering, together with a self-training algorithm to jointly model CWS and POS-tagging.",
"This approach is highly time-consuming (Qiu and Zhang, 2015).",
"Another challenge is the segmentation of domain-specific noun entities.",
"In a task of segmenting Chinese novels, Qiu and Zhang (2015) design a double-propagation algorithm with complex feature templates to iteratively extract noun entities and their context, to improve segmentation 1 https://github.com/vatile/CWS-NAACL2019 performance.",
"This approach still relies heavily on feature templates.",
"Similarly, our model does not require any annotated target data.",
"In contrast to their work, our model is efficient and feature-free.",
"There are CWS models deploying embeddings.",
"Ma and Hinrichs (2015) and Deng and Sun (2018) propose embedding matching CWS models, in which embeddings of characters in a sequence are compared with high dimensional representations of CWS-specific actions (e.g., separation and combination) or CWS-specific labels (e.g., B/M/E/S ).",
"Then each character is labeled according to the similarity between its embedding and the high dimensional representation.",
"Particularly, Zhou et al. (2017) propose to use character embeddings trained on a word-based context to improve the performance of existing neural CWS models, which is similar to our approach in terms of making use of CWS-oriented word embeddings derived with automatically segmented raw corpus.",
"However, in their work, when doing cross-domain CWS, word embeddings are fed into the baseline neural model trained on a large annotated general corpus, with annotated special domain data as the development set.",
"In our model, on the contrary, word embeddings are used directly for CWS with a non-parametric decoder, which does not require to re-construct the baseline model, and annotation is not required at all for the special domain data.",
"The overall architecture of our method is shown in Figure",
"1. Given a baseline segmenter and a target domain raw corpus T , we obtain an automatically segmented corpus T (cid:48) by applying the baseline segmenter to the target corpus.",
"We then execute our CWS-oriented word embedding model (Section 3.1) on T (cid:48) to derive a set of word embeddings E .",
"In addition, all tokens from T (cid:48) are collected as a target domain dictionary D .",
"Finally, E and D are used to re-segment T with our word-embedding-based segmenter (Section 3.2).",
"We use a CWS-oriented model modified from the Skip-gram model (Mikolov et al., 2013) to derive word embeddings.",
"A typical Skip-gram model using negative sampling tries to maximize the fol-Raw corpus: T Baselinesegmenter Segmented corpus: T ' Embeddingmodel Word embeddings: EWEBsegmenter Re-segmentedcorpus Figure 1: The pipeline of WEB-CWS.",
"(cid:88) ( w,c ) P log ( v w v (cid:62) c ) + (cid:88) ( w,c (cid:48) ) N log ( v w v (cid:62) c (1)",
"with P being the set of positive samples ( w, c ) consisting of a target word w and a context word c , N being the set of negative samples ( w, c (cid:48) ) consisting of a target word w and a word c (cid:48) drawn randomly from a noise distribution P n ( w ) , v w being the word embedding of w , and being the sigmoid activation function.",
"Subsampling is applied when choosing the target word w to reduce training time and to improve the quality of embeddings of rare words (Mikolov et al., 2013).",
"For a natural language, the frequency distribution of all words is expected to obey Zipf's law: a word's frequency is inversely proportional to its rank in the frequency table (Newman, 2005).",
"This highly biased distribution makes the training of the Skip-gram model inefficient, in that very frequent words can make a large portion of training samples, but their embeddings may not change much after being seen for a certain time (Mikolov et al., 2013).",
"Therefore, a subsampling method is used by Mikolov et al. (2013), with the probability for a word w being sampled as: p sub ( w ) = min (1 , (cid:114) (cid:15) f ( w )) (2) where (cid:15) is an arbitrarily chosen threshold, and f ( w ) is the frequency of w .",
"Since word embeddings in our model are used for segmentation, we cannot directly use the training objective in Mikolov et al. (2013), which is designed for language modeling.",
"To make the training objective more consistent with the goal of CWS, we modify the negative sampling Skip-gram model in various ways, including adding CWS-oriented negative samples, changing the method for subsampling multi-character words, normalizing the dot product of embeddings, and smoothing the weights of positive and negative samples in training.",
"When training a typical Skip-gram model, a target word and a word within its context window are taken together as a positive sample (Mikolov et al., 2013).",
"From the perspective of CWS, it can be perceived as teaching the model how to correctly segment a sequence.",
"From this perspective, we develop a method to generate negative samples from a word's context (i.e., words within the context window), in order to tell the model what the incorrect segmentations of a sequence are.",
"Given a target word w and its context C , and SL / SR as the sequence of characters on the left/right of w within C , the proposed context negative sampling method generates negative samples in the following way: for any substring s (cid:48) of SL and SR , if s (cid:48) is in the dictionary D but not in C , ( w , s (cid:48) ) will be generated as a negative sample.",
"Another way to generate negative samples concerning CWS is to split multi-character words and combine its substrings as negative samples.",
"For instance, given a multi-character target word w = c 1 c 2 c 3 , supposing that all its substrings are in D , the proposed in-word negative sampling method will then generate the following negative samples: ( c 1 , c 2 ) , ( c 1 , c 3 ) , ( c 2 , c 3 ) , ( c 1 c 2 , c 3 ) and ( c 1 , c 2 c 3 ) .",
"By doing so, our model is expected to learn not to split those multi-character words when segmenting.",
"In the Chinese language, there are some frequent multi-character words consisting of substrings which are also very frequent words themselves.",
"For example, the Chinese word d`ansh` (but)' can be decomposed into two words d`an (but)' and sh` (be)'.",
"Due to the nature of the Skip-gram model, embeddings of frequent words are relatively close to each other since they co-occur with other words more frequently.",
"This nature makes our model inclined to split such multi-character words when segmenting.",
"Although the in-word negative sampling method proposed above is expected to prevent our model from incorrectly splitting multi-character words, we still want our model to pay more attention to the segmentation of such words.",
"As a result, for subsampling, we will not discard a multi-character word w if: p sub ( w ) < N (cid:88) w sub S D p sub ( w sub ) (3) where S is the set of substrings of w , and N is the size of all w sub S D , which is smoothed by a threshold (which is empirically set to 0.5 in our model).",
"By doing so, we can keep those multi-character words whose substrings are more frequent words themselves.",
"Our model is thus expected to learn better how to segment such words through samples generated with in-word negative sampling.",
"In the original Skip-gram model, the dot product of embeddings of two words is directly used as the input for the sigmoid layer (Mikolov et al., 2013).",
"To make word embeddings derived from the CWS-oriented word embedding model more consistent with the metric used for segmentation as described in Section 3.2.2, we modify the training objective described in Equation (1) as follows: (cid:88) ( w,c ) P log ( || v w v (cid:62) c || ) + (cid:88) ( w,c (cid:48) ) N log ( || v w v (cid:62) c (cid:48) || ) (4) 3.1.5 Smoothing Class Weights For any target word, when training the CWS-oriented word embedding model, only one positive sample but many negative samples are generated.",
"To balance the influence of positive and negative samples, a different weight is assigned to each class as follows: class weight = (cid:40) 1 .",
"where N pos and N pos are the amount of positive and negative samples respectively, and is a smooth factor.",
"The smooth factor can prevent the weight of negative samples being too low when negative samples are much more than positive samples.",
"In WEB-CWS, we formalize the process of segmenting a sequence as a problem of hypotheses-based Viterbi decoding (Forney, 1973): given a sequence, generating segmentation hypotheses character-wise from the first to the last character, and then searching for the optimal path according to the predefined metric of probability.",
"Given a sentence consisting of n characters S = < c 0 > c 1 c 2 ...c n < c n +1 > ( c 0 and c n +1 are markers of the beginning/end of a sentence), we generate segmentation hypotheses character-wise from c 0 to c n +1 .",
"At each time step t , a hypothesis h t is defined as: h t = SEG t : [ w 0 { c 0 } w 1 { c 1 ...c g } ...w m { c j ...c k } ] BUF t : [ c k +1 ...c t ] M t = m + 1 (6) which includes a partial segmentation container SEG t , a buffer container BUF t , and the number of segmented words M t .",
"In h t , characters c 0 c 1 ...c k are segmented into words w 0 w 1 ...w m stored in SEG t ; characters c k +1",
"..c t remain unsegmented and are stored in BUF t .",
"For the initial hypothesis h 0 and the final hypothesis h n +1 , the buffer container will be empty.",
"Given a character c t +1 and a hypothesis h t , h t +1 can be generated in two ways, by either appending c t +1 to BUF t ( t (cid:54) = n ), or first moving the sequence in BUF t into SEG t as a new word, and then appending c t +1 to BUF t ( t (cid:54) = n ).",
"In the former case: h t +1 = SEG t : [ w 0 { c 0 } ...w m { c j ...c k } ] BUF t : [ c k +1 ...c t c t +1 ] M t = m + 1 (7) In the latter case: h t +1 = SEG t = w 0 { c 0 } ...w m { c j ...c k } w m +1 { c k +1 ...c t } BUF t = c t +1 M t = m + 2 (8) Particularly, when generating the hypothesis for c n +1 (the end of sentence marker), the sequence in BUF n has to be moved into SEG n , and c n +1 also needs to be moved into SEG n as a new word.",
"If a word w is not in the dictionary D , we cannot get its embedding.",
"As a result, any hypothesis containing w in the segmentation container will be discarded.",
"Moreover, to reduce search space, once the size of the sequence in a buffer container reaches a threshold m (i.e., the maximum word length), this sequence will be moved into the segmentation container when generating hypotheses at the next time step.",
"The log probability of a hypothesis is: log p(h_t) = 0 if t = 0, and (1 / (M_t − 1)) log Π_{i=1}^{M_t − 1} p(w_i | SEG_t, f) if t ≠ 0 (9), where f is the window size (e.g., if f = 2, p(w_i | SEG_t, f) will be decided by w_{i−1} and w_{i−2}).",
"In WEB-CWS, we use cosine similarity between embeddings of two words as the metric of probability.",
"Given a hypothesis h t , p ( w i | SEG t , f ) is calculated as follows: p ( w i | SEG t ,f ) = e 0 , i = 0 e 1 min ( f,i ) min ( f,i ) (cid:80) j =1 cos( v wi ,v wi j ) , i (cid:54) = 0 (10) where cos( v w i , v w i j ) refers to the cosine similarity between embeddings of w i and w i j .",
"Theoretically, a sequence of length n can have at most 2 n 1 possible segmentations.",
"By discarding hypotheses containing out-of-vocabulary (OOV) words and setting the maximum word length, the search space can be significantly reduced.",
"The very limited search space makes dynamically deciding the beam-size and the maximum word length possible.",
"Given an initial beam-size k , at each time step t , the segmenter will only keep at most top k hypothesis sorted by log probabilities in a descending order.",
"Some hypotheses will also be discarded due to OOV words and the maximum word length limit.",
"As a result, it is sometimes possible for a sequence to have no hypothesis at all after some time steps.",
"Once it happens, the segmenter will increase the beam-size by 10 and the maximum word length by 1, and then re-generate hypothesis from the beginning, till at least one hypothesis is generated at the final time step.",
"Dynamic beam-size and maximum word length ensure that for each sequence, at least one segmentation (the one given by the baseline segmenter) will be generated as the final segmentation result.",
"This mechanism can guarantee the efficiency and reliability of the segmenter at the same time.",
"To improve the decoding speed, we pre-calculate the cosine similarity of all word pairs co-occurring at least once in the automatically segmented corpus, and store them in a file (which only consists of millions of word pairs for a Chinese novel).",
"In doing so, when decoding, for most word pairs, we only need to look up to this file for similarity scores, which can significantly improve the decoding speed.",
"According to later experiments, this look-up strategy can cover about 92% of the similarity calculation needed for decoding.",
"We conduct experiments on various datasets in different domains to thoroughly evaluate the performance of our model.",
"We evaluate our model in terms of cross-domain CWS on five datasets, including three Chinese novel datasets (Qiu and Zhang, 2015): DL ( DouLuoDaLu ), FR ( FanRenXiuXianZhuan ) and ZX ( ZhuXian ), and two CWS datasets in special domains (Qiu et al., 2015): DM (dermatology) and PT (patent).",
"We use the standard split for all datasets as they are published.",
"Raw test data is also included for deriving word embeddings.",
"Statistics of these datasets are shown in Table",
"1. Since there are no gold segmentation of full novels for three Chinese novel datasets, their statistics are Dataset Sentence (K) Token (K) Character (K) Full Eval Full Eval Full Eval DL 40 1 1,982 32 2,867 47 FR 148 1 5,004 17 7,126 25 ZX 59 1 2,131 21 3,006 31 DM 32 1 709 17 1,150 30 PT 17 1 556 34 903 57 Table 1: Statistics of full and evaluation datasets.",
"based on the segmentation given by the baseline segmenter.",
"In some studies, pre-processing is applied in order to improve the performance of CWS models, including substituting consecutive digits and English letters, Chinese idioms and long words with unique symbols (Cai and Zhao, 2016; Cai et al., 2017; Chen et al., 2015).",
"However, we do not deploy such techniques for fair comparison, focusing only on the possible improvements brought by word embeddings.",
"The only pre-processing adopted in our model is to first split a sentence with a set of pre-defined delimiters: characters that are not Chinese characters, English letters or digits.",
"Those fragments of a sentence are then fed into the segmenter, and a complete segmented sentence is returned by reassembling the segmented fragments and delimiters in the original order.",
"Hyperparameters used in our WEB-CWS model are explained and their values are displayed in Table",
"2. All hyperparameters are tuned on a small excerpt of ZX, which consists of 300 sentences (Qiu and Zhang, 2015).",
"It is worth noting that, according to Mikolov et al. (2013), for each positive sample, the optimal number of negative samples drawn from a noise distribution is usually between 5 to 20.",
"However, in our model, we find that, for each target word, drawing one negative sample from a noise distribution is good enough, which may be caused by the large amount of negative samples generated by context and in-word negative sampling.",
"Also, Mikolov et al. (2013) report that the unigram distribution raised to the 3/4ths power is better than the uniform distribution for negative sampling.",
"But in WEB-CWS, using the uniform distribution leads to better segmentation results.",
"For consistency, all segmentation results are automatically calculated with the script provided in the SIGHAN Bakeoff (Emerson, 2005) and are reported as word F-measures.",
"Two state-of-the-art CWS models trained on a People's Daily corpus in 2000 January are tested.",
"One is a joint word segmentation and POS-tagging model (Zhang and Clark, 2010), and the other is a word-based neural CWS model (Cai et al., 2017).",
"When training both models, default settings are used, except that the maximum word length in Cai et",
"al.'s model is set to 5, which is in line with the setting of WEB-CWS.",
"On the evaluation set of PKU (Emerson, 2005), both models yield comparable results, but on the evaluation set of DL, Zhang and Clark's model (F-measure = 0.905) performs better than Cai et",
"al.'s model (F-measure = 0.849).",
"It is very possible that Zhang and Clark's model can handle cross-domain CWS more effectively.",
"As a result, we choose Zhang and Clark's model as the baseline segmenter for following experiments.",
"Results in Table 3 show that our WEB-CWS model can obviously improve CWS on four datasets in special domains, including DL, FR, ZX and DM, with an increase of over 3.0% in F-measure.",
"Those four datasets are all in domains (novel and dermatology) which are very different from that of the baseline segmenter (newspaper).",
"This result suggests that WEB-CWS can effectively improve cross-domain CWS.",
"are two possible reasons for this result.",
"First, the size of PT is the smallest among all datasets, which may make the quality of word embeddings unsatisfactory.",
"Second, the PT dataset contains a huge amount of decimal points (e.g., 3.14'), percentage signs (e.g., 28%'), hyphens (e.g., pMIV-Pnlp') and very long English strings (e.g., agct-gagtcg'), which are all cases that cannot be handled by WEB-CWS without corresponding preprocessing techniques.",
"We also compare WEB-CWS with two state-of-the-art semi-supervised and unsupervised cross-domain CWS models by Qiu and Zhang (2015) and Liu and Zhang (2012), both of which use the same baseline model proposed by Zhang and Clark (2010) as used in our model.",
"We adopt the method of combining character clustering and self-training in Liu and Zhang (2012) with datasets in our experiments.",
"Results of the model in Qiu and Zhang (2015) are directly copied from the corresponding paper.",
"Results in Table 3 show that WEB-CWS outperforms these two state-of-the-art models with a large margin in terms of F-measure.",
"Particularly, on the DM dataset, Liu and Zhang's model only achieves a relatively low F-score improvement rate (1.8%), which is likely to be caused by the large difference between the source and target domains.",
"This result suggests that WEB-CWS is more robust to domain dissimilarity compared with self-training.",
"We test the run time for our decoder on a 3.5 GHz Intel Core i7 CPU.",
"On all five test sets, the average decoding speed is 20.3 tokens per millisecond, when the initial beam-size is set to 10.",
"In the work of Zhou et al. (2017), the decoding speed of their model is 14.7 tokens per millisecond for greedy segmentation.",
"However, these results cannot be compared directly since they are produced on different machines.",
"Our similarity look-up strategy is proved to be efficient in improving the decoding speed.",
"In order to assess the effect of negative sampling and subsampling methods, we conduct a series",
"ablation experiments.",
"A detailed analysis is presented to understand in what way WEB-CWS can improve cross-domain CWS.",
"All experiments and analyses in this section are carried out on three datasets with most significant improvements in F-measure: DL, FR and DM.",
"In ablation experiments, we study the influence of two CWS-oriented negative sampling and multi-character words subsampling.",
"Results in Table 4 show that WEB-CWS using word embeddings derived with the basic Skip-gram model ( basic ') performs obviously worse than the baseline segmenter.",
"When CWS-oriented negative sampling is applied alone, either context ( c n ') or in-word ( w n ') negative sampling, the performance of WEB-CWS is obviously better than or similar to that of the baseline segmenter.",
"When both CWS-oriented negative sampling methods are applied together ( c w n '), WEB-CWS is ensured to obviously outperform the baseline segmenter.",
"Also, when multi-character subsampling ( m s ') is applied, the performance of WEB-CWS can further improve a little.",
"To see which words are incorrectly segmented by the baseline segmenter but correctly by WEB-CWS, all words occurring at least ten times in the three datasets are sorted in a descending order,",
"by improvements in terms of segmentation precision.",
"Table 5 displays the ten most improved words.",
"As shown in Table 5, among the ten most improved words, seven words are domain-specific noun entities, including person names, disease names and chemical compound names.",
"For some noun entities (e.g., glucocorticoid ), even if the baseline segmenter can rarely segment them correctly, WEB-CWS can still find the correct segmentation in most cases.",
"This result suggests that WEB-CWS is especially effective in segmenting domain-specific noun entities.",
"We have proposed WEB-CWS, a semi-supervised model that can be used to effectively improve cross-domain CWS.",
"Our model only requires a baseline segmenter and a raw corpus in the target domain, deploying only word embeddings for CWS.",
"WEB-CWS obviously improves the performance of the state-of-the-art baseline segmenter on four datasets in special domains, especially in segmenting domain-specific noun entities.",
"This paper was partially supported by the National Natural Science Foundation of China (No.",
"61572245).",
"Thanks to You Wang et al. and the anonymous reviewers for their constructive and insightful comments on this paper.",
"References Deng Cai and Hai Zhao.",
"2016.",
"Neural word segmentation learning for chinese.",
"arXiv preprint arXiv:1606.04300 .",
"Deng Cai, Hai Zhao, Zhisong Zhang, Yuan Xin, Yongjian Wu, and Feiyue Huang.",
"2017.",
"Fast and accurate neural word segmentation for chinese.",
"arXiv preprint arXiv:1704.07047 .",
"Xinchi Chen, Zhan Shi, Xipeng Qiu, and Xuanjing Huang.",
"Adversarial multi-criteria learning for chinese word segmentation.",
"arXiv preprint arXiv:1704.07556 .",
"Xiaolong Deng and Yingfei Sun.",
"2018.",
"An improved embedding matching model for chinese word segmentation.",
"In 2018 International Conference on Ar-tificial Intelligence and Big Data (ICAIBD) .",
"IEEE.",
"2005.",
"The second international chinese word segmentation bakeoff.",
"In Proceedings of the fourth SIGHAN workshop on Chinese language Processing .",
"1973.",
"The viterbi algorithm.",
"Proceedings of the IEEE , 61(3):268278.",
"Qin Gao and Vogel Stephan.",
"2010.",
"A multi-layer chinese word segmentation system optimized for out-of-domain tasks.",
"In CIPS-SIGHAN Joint Conference on Chinese Language Processing .",
"Yoav Goldberg and Omer Levy.",
"2014.",
"Word2vec explained: Deriving mikolov et",
"al.'s negative-sampling word-embedding method.",
"arXiv preprint arXiv:1402.3722 .",
"Yang Liu and Yue Zhang.",
"2012.",
"Unsupervised domain adaptation for joint segmentation and pos-tagging.",
"Proceedings of COLING 2012: Posters , pages 745 754.",
"Jeff Mitchell and Mirella Lapata.",
"2010.",
"Composition in distributional models of semantics.",
"Cognitive science , 34(8):13881429.",
"Mark EJ Newman.",
"2005.",
"Power laws, pareto distributions and zipf's law.",
"Contemporary physics , 46(5):323351.",
"Likun Qiu and Yue Zhang.",
"2015.",
"Word segmentation for chinese novels.",
"In AAAI , pages 24402446.",
"Sunita Sarawagi and William W Cohen.",
"2005.",
"Semi-markov conditional random fields for information extraction.",
"In Advances in neural information processing systems , pages 11851192.",
"Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning.",
"2005.",
"A conditional random field word segmenter for sighan bakeoff 2005.",
"In Proceedings of the fourth SIGHAN workshop on Chinese language Processing ."
] | [
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"other",
"other",
"other",
"other",
"method",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"While neural text-to-speech systems perform remarkably well in high-resource scenarios, they cannot be applied to the majority of the over 6,000 spoken languages in the world due to a lack of appropriate training data.",
"In this work, we use embeddings derived from articulatory vectors rather than embeddings derived from phoneme identities to learn phoneme representations that hold across languages.",
"In conjunction with language agnostic meta learning, this enables us to fine-tune a high-quality text-to-speech model on just 30 minutes of data in a previously unseen language spoken by a previously unseen speaker.",
"The advance of deep learning (Vaswani et al., 2017; Goodfellow et al., 2014) has enabled great improvements in the field of Text-to-Speech (TTS).",
"(Towards-)end-to-end models, such as Tacotron 2 (Wang et al., 2017; Shen et al., 2018), Trans-formerTTS (Li et al., 2019b), FastSpeech 2 (Ren et al., 2019, 2020), FastPitch (ancucki, 2021) and many more famous instances (e.g. Ark et al. (2017) and Prenger et al. (2019)) allow for speech synthesis with unprecedented quality and controllability.",
"The models mentioned here rely on vocoders, such as WaveNet (van den Oord et al., 2016), MelGAN (Kumar et al., 2019), Parallel Wave-GAN (Yamamoto et al., 2020) or HiFi-GAN (Kong et al., 2020) to turn the parametric representations that they produce into waveforms.",
"Recently proposed models even include some with the ability to go directly to the waveform from a grapheme or phoneme input sequence, such as EATS (Donahue et al., 2020) or VITS (Kim et al., 2021).",
"While these methods all perform remarkably well if given enough data, cross-lingual use of data remains a key challenge in TTS.",
"Most modern methods are limited to languages and domains that are rich in resources, which over 6,000 languages are not.",
"Attempts at reducing the required resources in a target language by making use of transfer learning from multilingual data have been made by Azizah et al. (2020); Xu et al. (2020); Chen et al. (2019).",
"The mismatch of input spaces however requires complex architectural changes, which limits their ability to be used in conjunction with other modern TTS architectures.",
"Attempts at fixing the issue of having to transfer knowledge from a source to a target by just jointly training on a mixed set of more and less resource rich languages have been made by He et al. (2021); de Korte et al. (2020); Yang and He (2020), which requires complex training procedures.",
"In this work, we will also attempt to transfer knowledge from a set of high resource languages to a low resource language.",
"We fix previous shortcomings by 1) using a linguistically motivated representation of the inputs to such a system (articulatory and phonological features of phonemes) that enables cross-lingual knowledge sharing and 2) applying the model agnostic meta learning (MAML) framework (Finn et al., 2017) to the field of low-resource TTS for the first time.",
"Using articulatory features as inputs for neural TTS has been attempted recently by Staib et al. (2020) and Wells et al. (2021), following the classical approach of Jakobson et al. (1961).",
"Both achieved good results when applying this idea to the codeswitching problem, since unseen phonemes in the input space no longer map to nonsensical positions, as it would be the case for the standard embedding-lookup.",
"It has to be noted however, that this only works across languages with similar types of phonemes.",
"Also Gutkin (2017) have applied phonological features to low-resource TTS with fair success.",
"They did however rely on supplementary features, such as dependency parsers and morphological analyzers.",
"Furthermore all of their data and models are proprietary and can therefore not be used to compare results to.",
"In this work, we extend the use of articulatory inputs with 6858 the MAML framework to enable very simple yet well working low-resource TTS that can be applied to almost all modern TTS architectures.",
"We encounter severe instabilities when using MAML on TTS, which make the standard formulation of MAML infeasible to use.",
"Thus we also propose a modification to MAML, which reduces the procedure's complexity.",
"This allows us to create a set of parameters of a model that can be used to fine-tune to a well working single-language single-speaker TTS model with as little as 30 minutes of paired training data available and even enables zero-shot adaptation to unseen languages.",
"We evaluate the success of our approach with both automatic measures and human evaluation.",
"Our contributions are as follows: 1) We show that it is beneficial to train a TTS model on articulatory features rather than on phoneme-identities, even in the standard single-language high-resource case; 2) We introduce a training procedure that is closely related to MAML which allows training a set of parameters for a TTS model that can be fine-tuned in a low resource scenario; 3) We provide insights on how much data and training time are required to fine-tune a model across different languages and speakers simultaneously using said meta-parameters; 4) We show that the meta-parameters can generalize to unseen phonemes and rapidly improve their ability to properly pronounce them when fine-tuning.",
"1 2 Background and Related Work 2.1 Input Representations Character Embeddings The simplest approach to representing text as input to a TTS is using indexes of graphemes to look up embeddings.",
"This is however prone to mistakes.",
"Taylor and Richmond (2020) bring up the example of coathanger .",
"If the TTS is not aware of the morpheme boundary between the coat and the hang , it will be inclined to produce something like [k 2T@I n @ ] rather than the correct [ko U th N@ ].",
"Such a representation of the input will be highly language dependent, since special pronunciation rules rarely hold for more than a single language.",
"The textual input can be augmented by adding information, such as morpheme boundaries, intona-1 All of our code, as well as the checkpoints for a low-resource fine-tuning capable Tacotron 2 and FastSpeech 2 model are publicly available at https://github.com/ DigitalPhonetics/IMS-Toucan .",
"tion phrase boundaries derived from e.g. syntactic parsing as is done in many TTS frontends (Schrder and Trouvain, 2003; Clark et al., 2007; Ebden and Sproat, 2015), or even the semantic identity of the word a character belongs to, using e.g. BERT embeddings (Hayashi et al., 2019).",
"Phoneme Embeddings Rather than looking up embeddings for graphemes, it is often beneficial to use embeddings of phonemes.",
"Phonemizers (Bisani and Ney, 2008; Taylor, 2005; Rao et al., 2015) produce a sequence of phonetic units, which correlate with the segments in the audio much more than raw text.",
"One such standard of phonetic representation which we make use of is the International Phonetic Alphabet (IPA).",
"Using this set of phonetic units alleviates the problems of TTS fine-tuning and transfer-learning to low-resource domains, because the phonetic units should be mostly language independent.",
"Deri and Knight (2016) provide a data driven approach for the grapheme to phoneme conversion task, which performs well on over 500 languages and can be adapted fairly easily to any new low-resource language.",
"There remains however one major challenge: The use of different phoneme sets for each language, leading to completely unseen units in inference or fine-tuning data.",
"Latent Representations Li et al. (2019a) claim that multilinguality in speech recognition and TTS can be achieved by changing the input to a latent representation that is trained across languages.",
"While their results seem very promising, their technique needs training data in all languages it should be applied to, which rules out zero-shot settings.",
"Articulatory Features We fix the shortcoming of not being able to handle unseen phonemes by specifying phonemes in terms of articulatory features such as position (e.g. frontness of the tongue) and category (e.g. voicedness).",
"We show that systems trained on this input can produce a phoneme given nothing but an articulatory description and thus generalize to unseen phonemes.",
"This makes the transfer of knowledge across languages much simpler.",
"A similar approach for the purpose of handling codeswitching has been done in Staib et al. (2020).",
"Our work builds on top of theirs by extending the idea to transfer learning an entire TTS in a new language with minimal data, making use of meta learning on top of articulatory features.",
"The goal of MAML (Finn et al., 2017) is to find a set of parameters, that work well as initialization point for multiple tasks, including unseen ones.",
"The procedure consists of an outer loop and an inner loop.",
"The outer loop starts with a set of parameters, which we will call the Meta Model.",
"The inner loop trains task specific copies of the Meta Model for a low amount of steps.",
"Once the inner loop is complete, the loss for each of the models is calculated, summed, and backpropagated to the original Meta Model by unrolling the inner loop.",
"This includes the very costly calculation of second order derivatives.",
"The Meta Model is then updated and the inner loop starts again.",
"This procedure moves the initialization point closer to the optimal configuration for each of the trained tasks, which generalizes to even unseen tasks.",
"Multiple variants of MAML have been suggested that try to fix the high computational cost of the second order derivatives.",
"The simplest one is called first-order MAML and simply applies the gradient of the task specific model at the end of the inner loop directly to the Meta Model.",
"Other variants are described in Antoniou et al. (2019); Rajeswaran et al. (2019).",
"For the implementation of our method, we use the open source IMS Toucan speech synthesis toolkit, first introduced in (Lux et al., 2021), which is in turn based on the ESPnet end-to-end speech processing toolkit (Watanabe et al., 2018; Hayashi et al., 2020, 2021).",
"Neekhara et al. (2021) show, that it is beneficial to fine-tune a single-speaker model to a new speaker rather than to train a multi-speaker model.",
"Inspired by this, we decided to also use a model that is not conditioned on speakers or on languages rather than a conditioned multi-speaker multi-lingual model and fine-tune it on the data from a new speaker in a new language.",
"In preliminary experimentation we got similar results to them within one language, but found their method to not work across languages.",
"In comparison to the fine-tuning of a simple single speaker model, we found training and fine-tuning a model conditioned on language embeddings and speaker embeddings much more sensitive to the choice of hyperparam-eters.",
"Figure 1 shows an overview of our system, underlining how it is not specific to a certain archi-Figure 1: Overview of the TTS pipeline we use.",
"The top row shows the modality in which the data is at this point in the pipeline.",
"The lower row shows the methods that handle the transitions.",
"Each of the blocks in the lower row can be exchanged easily with other methods that have the same interfaces.",
"tecture, but could instead be used in conjunction with almost all modern TTS methods.",
"Tacotron 2 For our implementation of Tacotron 2 (Shen et al., 2018), we make use of the forward attention with transition agent introduced in Zhang et al. (2018), which uses a CTC-like forward variable (Graves et al., 2006) to promote the quick learning of monotonic alignment between text and speech.",
"To further help with this, we make use of the guided attention loss introduced in Tachibana et al. (2018).",
"FastSpeech 2 To train the parallel FastSpeech 2 model (Ren et al., 2020), annotations of durations for each phoneme are needed.",
"These also have to be generated for the low-resource fine-tuning data.",
"To that end, we generate alignments using the encoder-decoder attention map of a Tacotron 2 model.",
"Following Kim et al. (2020); Shih et al. (2021); Badlani et al. (2021), we apply the Viterbi algorithm to find the most probable monotonic path through the attention map, which significantly improves the quality of the alignments.",
"This is especially important, because we train our FastSpeech 2 model with pitch and energy labels that are averaged over the duration of each individual phoneme to allow for great controllability during inference, as is introduced by ancucki (2021).",
"Incorrect alignments would lead to followup errors such as an unnaturally flat prosody.",
"Furthermore, we make use of the conformer block (Gulati et al., 2020) as the encoder and decoder, rather than the standard transformer (Vaswani et al., 2017).",
"PanPhon The PanPhon resource (Mortensen et al., 2016) can be used to get linguistic specifications of phonemes.",
"It comes with an open-source 6860 tool 2 which we use to convert phonemes into numeric vectors.",
"Each vector encodes one feature per dimension and takes the value of either -1, 0 or 1, putting the features on a scale wherever meaningful.",
"This featureset also includes phonological features which go beyond simple phonetics, such as whether a phoneme is syllabic.",
"Papercup Additionally we make use of the purely articulatory description system of phonemes introduced in Staib et al. (2020), which we will call Papercup features in the following.",
"For the encoding we use one-hot vectors, similar to their implementation.",
"Some of the features, like openness or frontness, should be on a scale rather than one-hot encoded.",
"However since the articulatory vector is fed into a fully connected layer, we leave the reconstruction of this dependency between features for the network to learn.",
"We find that the standard implementation of MAML does not work well for the TTS task.",
"The inner loop needs hundreds of updates in order to make a significant change to the performance of the task specific model.",
"This is probably due to the TTS task being a one-to-many mapping task, where the loss function of measuring the distance to a spectrogram is not an accurate objective for the TTS.",
"For every text, there are infinitely many spectrograms, which could be considered gold data.",
"Those spectrograms could differ in e.g. the speaker who reads the text and how they read the text.",
"Since there are no conditioning signals, the TTS has to update its parameters towards a certain speaker's characteristics in general.",
"However because in our case each task is a different language and a different speaker, the training becomes highly unstable.",
"So ideally we would either need to run MAML's inner loop until convergence, which is generally infeasible, or stabilize the procedure by not allowing the model to adapt further to one task than to the others.",
"To fix this issue, we calculate the Meta Model's loss on one batch per language.",
"We then sum up the losses, backpropagate and update the Meta Model directly using Adam (Kingma and Ba, 2015).",
"This stabilizes the learning procedure, but still allows the model to update its parameters towards a more universal configuration.",
"Since we have to make this simplification to MAML in order to deal with 2 https://github.com/dmort27/panphon the different languages as tasks, we call this procedure language agnostic meta learning (LAML).",
"Ultimately, the model should not care about the language it is fine-tuned in, since it should be close to a universal representation of an acoustic model.",
"To give an exact notion of our modifications: We simplified equation 1 to equation 2, where opt is a gradient descent update, B i is a batch sampled from task i , L is an objective function, is the set of parameters from the Meta Model and i is the set of parameters specific to task i .",
"To the best of our knowledge, we are the first to successfully apply MAML to TTS with languages being the tasks.",
"for t steps do: t = opt (cid:32) t 1 , (cid:88) i L ( i,d , B i ) (cid:33) where i,d =0 = t 1 and for d steps do: i,d = opt ( i,d 1 , L ( i,d 1 , B i )) (1) for t steps do: t = opt (cid:32) t 1 , (cid:88) i L ( t 1 , B i ) (cid:33) (2) 4 Experiments In this section we will go over the experiments we conducted.",
"First we will evaluate the articulatory features on their own in a single language setting using automatic measures.",
"Then we will evaluate the combination of LAML and articulatory features in a cross-lingual setting using both automatic measures and human evaluation.",
"In our experiments we make use of the following datasets: The English Nancy Krebs dataset (16h) from the Blizzard challenge 2011 (Wilhelms-Tricarico et al., 2011; King and Karaiskos, 2011); The German dataset of the speaker Karlsson (29h) from the HUI-Audio-Corpus-German (Puchtler et al., 2021); The Greek (4h), Spanish (24h), Finnish (11h), Russian (21h), Hungarian (10h), Dutch (14h) and French (19h) subsets of the CSS10 dataset (Park and Mulc, 2019).",
"To explore our first hypothesis, we investigate the capabilities of the articulatory phoneme representations to be used in a single-speaker and single-language TTS system.",
"To compare different ways 6861 of embedding the features, we train only the embedding function.",
"As gold data we use the embeddings from a well trained lookup-table based Tacotron 2 model.",
"In table 1 we show the average distances of all articulatory vectors as projected by the embedding function to their identity based embedding counterpart.",
"The distance d between two embedding vectors A and B is defined in equation 3.",
"(3) This distance function is also used as the objective function.",
"The embedding functions are each trained for 3000 epochs using Adam (Kingma and Ba, 2015) with a batchsize of 32.",
"The first column shows the results of the articulatory features being fed into a linear layer that projects them into a 512 dimensional space.",
"The second column shows the results of the articulatory features being fed into a linear layer that projects them into a 100 dimensional space, applies the tanh activation function and then further projects them into a 512 dimensional space.",
"As can be seen from the results, it is beneficial to both concatenate the PanPhon features with the Papercup features despite their overlap and to add a nonlinearity into the embedding function to match the embeddingspace of a well trained Tacotron 2 model.",
"Hence we use this setup in all following experiments.",
"To investigate the impact that the articulatory features have on their own, we train a Tacotron 2 with and without them on the Nancy dataset and compare their training time and final quality.",
"While the model trained on embedding tables shows a clear diagonal alignment of text and spectrogram frames on an unseen test sentence after 2,000 steps, the one trained on articulatory features does so already at 500 steps.",
"This is visualized in figure 2.",
"The decoder of the Tacotron 2 model can only start to learn to decode after the alignment of inputs to outputs is learned.",
"So learning the alignment earlier gives the articulatory model a clear benefit.",
"After training for 80,000 steps however, our own subjective assessment finds no difference in quality between the two.",
"The earlier convergence of the alignment however shows a possible advantage of using the articulatory features on low-resource tasks, as quicker training progress means that training can be stopped earlier, before overfitting on little data becomes too problematic.",
"(a) Proposed Tacotron 2 with articulatory features at 500 steps with a batchsize of 32.",
"(b) Baseline Tacotron 2 with embedding-lookup at 2000 steps with a batchsize of 32.",
"In order to investigate the effectiveness of our proposed LAML procedure, we train a Tacotron 2 model and a FastSpeech 2 model on the full Karlsson dataset as a strong baseline.",
"We also train another Tacotron 2 model and another FastSpeech 2 model on speech in 8 languages with one speaker per language (Nancy dataset and CSS10 dataset) and fine-tune those models on a randomly chosen 30 minute subset from the Karlsson dataset.",
"To our surprise, we did not only match, but even outperform the model trained on 29 hours with the model fine-tuned on just 30 minutes in multiple metrics.",
"As a second baseline we tried to train another meta-checkpoint using the embedding lookup-table approach to also further investigate the effectiveness of the articulatory features.",
"We did however not manage to get such a model to converge to a usable state.",
"This already shows the superiority of 6862 the articulatory feature representations for such a multilingual use-case.",
"Furthermore we tried to fine-tune the well trained English single speaker models from the first experiment on the 30 minutes of German to have another baseline that can be used to measure the impact of the LAML procedure.",
"This setup however also did not yield any usable results.",
"During the fine-tuning process, the model was capable of speaking German with a strong English accent, yet it did not properly learn to speak in the voice of the target speaker.",
"By the time the model learned to speak in the new speaker's voice, it had overfit-ted the 30 minutes of training data and collapsed, producing no more intelligible speech.",
"We conclude that the method proposed in this paper not only improves on the ability to use cross-lingual data easily, but actually enables it in the first place.",
"Both the articulatory features, as well as the LAML pretraining seem necessary to achieve cross-lingual fine-tuning on low-resource data.",
"The texts we use for the following experiments are disjunct from any training data used.",
"Human speech as gold standard is not used, since we are interested in the difference in performance between the systems, not their absolute performance.",
"The close to state-of-the-art performance of the baselines is considered as given, considering their ideal training conditions and use of proven methods.",
"Furthermore, we chose to use German as our benchmark language over an actual low-resource language, since it is much easier to acquire reliable ratings on intelligibility and naturalness for German, than it would be for an actual low-resource language.",
"To compare intelligibility between our baseline models and our low-resource models, we use the word error rate (WER) of an automatic speech recognition system (ASR) as a proxy.",
"We synthesize 100 sentences of German radio news texts taken from the DIRNDL corpus (Eckart et al., 2012) with each of our baselines and corresponding low-resource systems.",
"Table 2 shows WERs that the German IMS-Speech ASR (Denisov and Vu, 2019) achieves on the synthesized data.",
"For both Tacotron 2 and the FastSpeech 2 based system, the WER of the low-resource model is slightly lower than that of the baseline, thus the low-resource models performed slightly better.",
"system outperformed the baseline, we find code-switched segments, where the texts contain names of Russian cities.",
"Since the pretraining data of the low-resource model includes Russian speech, it seems to have not forgotten entirely about what it has seen in the pretraining phase, which in our interpretation confirms the effectiveness of the LAML against the catastrophic-forgetting problem (French, 1999) of regular pretraining.",
"In order to assess the naturalness of the fine-tuned models, we conduct a preference study with 34 native speakers of German.",
"Each participant is shown 12 phonetically balanced samples produced by the Tacotron 2 and FastSpeech 2 models.",
"For every sentence, there is one sample produced by the baseline and one by the low-resource model.",
"The participants are then asked to indicate their subjective overall preference between the two samples.",
"The results for Tacotron 2 are shown in figure 3",
"(a).",
"The low-resource system was the preferred system in more than half of the cases, with an equal rating taking up more than another third, showing a clear preference for the low-resource model over the baseline.",
"The results for FastSpeech 2, as seen in figure 3",
"(b), are a lot more balanced.",
"While the baseline is preferred more often than the low-resource variant, it is not the case in the majority of the ratings.",
"In 56% of the cases, the model fine-tuned on 30 minutes of data was perceived to be as good or better than the model trained on 29 hours.",
"Computational Resources All models were trained on a single NVIDIA A6000 GPU.",
"Training the Tacotron Baseline took 2 days.",
"Training time of the FastSpeech Baseline was 1 day.",
"Training time of the meta-checkpoint was 4 days, finetuning to a new model from the meta-checkpoint however only takes 2 hours.",
"The HiFi-GAN vocoder used to generate all samples took 4 days to train and was not fine-tuned on the unseen data.",
"We did not perform hyperparameter searches and used the suggested default settings for all methods, which worked sufficiently well, but could surely be improved.",
"What is the ideal amount of training steps for fine-tuning?",
"To investigate the amount of update steps needed to fully adapt to the new speaker with the added difficulty of learning a new language, we show the cosine similarity of a speaker embedding of the fine-tuned model to that of the ground truth throughout the fine-tuning process in figure",
"4. The speaker embedding is built according to the ECAPA-TDNN architecture (Desplanques et al., 2020) and provided open source by SpeechBrain (Ravanelli et al., 2021).",
"It is trained on VoxCeleb 1 and 2 (Nagrani et al., 2017, 2019; Chung et al., 2018) which to the best of our knowledge does not overlap with any of the other training and evaluation data we used.",
"We tried to decrease adaptation time further by incorporating said speaker embedding similarity as an additional objective function, similar to Nachmani et al. (2018), we did however see only marginal improvements in the amount of steps needed at the expense of greatly increased training time.",
"5. We removed Dutch and Finnish from the training data of the meta-checkpoint and trained another version of it, to be able to see how it handles all of the now completely unseen phonemes specific to German.",
"While their correct position in plot",
"(a) can be considered given, since it shows the articulatory featurespace, their meaningful positions in plot",
"(b) and",
"(c) show that the meta-checkpoint does not just collapse the vector of the unseen phoneme to the one it is most similar to, but actually generalizes.",
"While their pronunciation when produced does not match the correct pronunciation perfectly, it can be understood in the context of a longer sequence.",
"This is congruent with the results of Staib et al. (2020).",
"During the adaptation phase, the pronunciation of the unseen phonemes rapidly matches the correct pronunciation after less than 100 steps.",
"Does this setup learn the difference between language and speaker?",
"When analyzing the fine-tuned meta-checkpoint, we observed that it seems to link the language of the input to the voice of the speaker.",
"For example when synthesizing an unseen Hungarian text using Tacotron 2, the voice of the synthesis resembles that of the Hungarian female speaker, even though the model has been fine-tuned on the male German speaker and there are no additional conditioning signals.",
"We hypothesize that the LAML procedure induces certain subsets of parameters in the model to be speaker dependent and the encoder of the model priming those parameters purely based on the phoneme sequence.",
"This leads us to believe, that the fine-tuning of all parameters in the model may neither be necessary, nor even the best way of adapting to new data.",
"This also fits the observations of the speaker embedding over time, since the Tacotron model adapts to the new speaker very rapidly.",
"Further investigations into 6864",
"the interactions between parameter groups could allow cutting down the amount of parameters that need to be trained significantly, further reducing the need for training data.",
"How can we bring down FastSpeech 2's data need further?",
"A similar observation regarding language and speaker can be made with FastSpeech 2, however as could be seen from the experiment on naturalness and the training time, the FastSpeech 2 model can benefit more from additional data and training time.",
"This may come down to its nearly twice as high parameter count.",
"So a more effective fine-tuning strategy, that considers some parameters as constants, could benefit the fine-tuning capabilities of the FastSpeech 2 model greatly.",
"Does this work across language families?",
"One limitation to our findings is that we investigated only the transfer of languages that share similar phoneme inventories.",
"It is possible that fine-tuning to a language that uses e.g. the lexical tone rather than pitch accents or word accents would require pretraining in more closely related high-resource languages, such as Chinese.",
"However, as Vu and Schultz (2013) find in their analysis of multilingual ASR, the fast adaptation of an acoustic model trained on multiple languages to unseen languages works well, even across different language families.",
"We thus believe that the technique and analysis presented in this paper also holds across language families and types.",
"In this paper, we show an approach for training a model in a language for which only 30 minutes",
"of data are available by making use of articulatory features and language agnostic meta learning.",
"The main takeaways from our work are as follows: Articulatory Features for TTS Using articulatory features as the input representation to a TTS system enables the use of multilingual data without the need for increased architectural complexity, such as language specific projection spaces.",
"It is furthermore beneficial to use even in single-language scenarios, since the knowledge sharing between phonemes makes the TTS system converge much earlier to an usable state during training.",
"MAML on TTS Applying MAML to TTS does not work well.",
"If we however remove the inner loop, we are able to pretrain a low-resource capable checkpoint for TTS.",
"This modification not only makes it work, it also simplifies the formulation.",
"Zero-shot capabilities The use of articulatory features enables zero-shot inference on unseen phonemes.",
"This is further enhanced by the LAML training procedure.",
"The implications of this are particularly interesting for codeswitching, as Staib et al. (2020); Wells et al. (2021) have pointed out previously.",
"Using these two techniques in conjunction could be used to reduce the problem of codeswitching to a problem of token-wise language identification.",
"We would like to thank the anonymous reviewers for their insightful feedback and suggestions.",
"This work was funded by the Carl Zeiss Foundation."
] | [
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Human language understanding operates at multiple levels of granularity (e.g., words, phrases, and sentences) with increasing levels of abstraction that can be hierarchically combined.",
"However, existing deep models with stacked layers do not explicitly model any sort of hierarchical process.",
"This paper proposes a recursive Transformer model based on differentiable CKY style binary trees to emulate the composition process.",
"We extend the bidirectional language model pre-training objective to this architecture, attempting to predict each word given its left and right abstraction nodes.",
"To scale up our approach, we also introduce an efficient pruned tree induction algorithm to enable encoding in just a linear number of composition steps.",
"Experimental results on language modeling and unsupervised parsing show the effectiveness of our approach.",
"1 1 Introduction The idea of devising a structural model of language capable of learning both representations and meaningful syntactic structure without any human-annotated trees has been a long-standing but challenging goal.",
"Across a diverse range of linguistic theories, human language is assumed to possess a recursive hierarchical structure (Chomsky, 1956, 2014; de Marneffe et al., 2006) such that lower-level meaning is combined to infer higher-level semantics.",
"Humans possess notions of characters, words, phrases, and sentences, which children naturally learn to segment and combine.",
"Pretrained language models such as BERT (De-vlin et al., 2019) have achieved substantial gains Equal contribution.",
"across a range of tasks.",
"However, they simply apply layer-stacking with a fixed depth to increase the modeling power (Bengio, 2009; Salakhutdinov, 2014).",
"Moreover, as the core Transformer component (Vaswani et al., 2017) does not capture positional information, one also needs to incorporate additional positional embeddings.",
"Thus, pretrained language models do not explicitly reflect the hierarchical structure of linguistic understanding.",
"Inspired by Le and Zuidema (2015), Maillard et al. (2017) proposed a fully differentiable CKY parser to model the hierarchical process explicitly.",
"To make their parser differentiable, they primarily introduce an energy function to combine all possible derivations when constructing each cell representation.",
"However, their model is based on Tree-LSTMs (Tai et al., 2015; Zhu et al., 2015) and requires O ( n 3 ) time complexity.",
"Hence, it is hard to scale up to large training data.",
"In this paper, we revisit these ideas, and propose a model applying recursive Transformers along differentiable trees (R2D2).",
"To obtain differentiability, we adopt Gumbel-Softmax estimation (Jang et al., 2017) as an elegant solution.",
"Our encoder parser operates in a bottom-up fashion akin to CKY parsing, yet runs in linear time with regard to the number of composition steps, thanks to a novel pruned tree induction algorithm.",
"As a training objective, the model seeks to recover each word in a sentence given its left and right syntax nodes.",
"Thus, our model does not require any positional embedding and does not need to mask any words during training.",
"Figure 1 presents an example binary tree induced by our method: Without any syntactic supervision, it acquires a model of hierarchical construction from the word-piece level to words, phrases, and finally the sentence level.",
"We make the following contributions: Our novel CKY-based recursive Transformer on differentiable trees model is able to learn both representations and tree structure (Section 2.1).",
"We propose an efficient optimization algorithm to scale up our approach to a linear number of composition steps (Section 2.2).",
"We design an effective pre-training objective, which predicts each word given its left and right syntactic nodes (Section 2.3).",
"For simplicity and efficiency reasons, in this paper we conduct experiments only on the tasks of language modeling and unsupervised tree induction.",
"The experimental results on language modeling show that our model significantly outperforms baseline models with same parameter size even in fewer training epochs.",
"At unsupervised parsing, our model as well obtains competitive results.",
"Figure 2 : Chart data structure.",
"There are two alternative ways of generating T 1 , 3 : combining either ( T 1 , 2 , T 3 , 3 ) or ( T 1 , 1 , T 2 , 3 ).",
"Differentiable Tree.",
"We follow Maillard et al. (2017) in defining a differentiable binary parser using a CKY-style (Cocke, 1969; Kasami, 1966; Younger, 1967) encoder.",
"Informally, given a sentence S = { s 1 , s 2 , ..., s n } with n words or word-pieces, Figure 2 shows the chart data structure T , where each cell T i,j is a tuple (cid:104) e i,j , p i,j , (cid:101) p i,j (cid:105) , e i,j is a vector representation, p i,j is the probability of a single composition step, and (cid:101) p i,j is the probability of the subtree at span [ i, j ] over sub-string s i : j .",
"At the lowest level, we have terminal nodes T i,i with e i,i initialized as embeddings of inputs s i , while p i,i and (cid:101) p i,i are set to one.",
"When j > i , the representation e i,j is a weighted sum of intermediate combinations c ki,j , defined as: c ki,j , p ki,j = f ( e i,k , e k +1 ,j ) (1) (cid:101) p ki,j = p ki,j (cid:101) p i,k (cid:101) p k +1 ,j (2) i,j = GUMBEL (log( (cid:101) p i,j )) (3) e i,j = [ c ii,j , c i +1 i,j , ..., c j 1 i,j ] i,j (4) [ p i,j , (cid:101) p i,j ] = (cid:124) i,j [ p i,j , (cid:101) p i,j ] (5) Here, k is a split point from i to j 1 , f ( ) is a composition function that we shall further define later on, p ki,j and (cid:101) p ki,j denote the single step combination probability and the subtree probability, respectively, at split point k , p i,j and (cid:101) p i,j are the concatenation of all p ki,j or (cid:101) p ki,j values, and GUMBEL is the Straight-Through Gumbel-Softmax operation of Jang et al. (2017) with temperature set to one.",
"The [ , ] notation denotes stacking of tensors.",
"Figure 3 : Recursive Transformer-based encoder.",
"Recursive Transformer.",
"Figure 3 provides a schematic overview of the composition function f ( ) , comprising N Transformer layers.",
"Taking c ki,j and p ki,j as an example, the input is a concatenation of two special tokens [SUM] and [CLS] , the left cell e i,k , and the right cell e k +1 ,j .",
"We also add role embeddings ( [LEFT] and [RIGHT] ) to the left and right inputs, respectively.",
"Thus, the input consists of four vectors in R d .",
"We denote as h [SUM] , h [CLS] , h i,k , h k +1 ,j R d the hidden state of the output of N Transformer layers.",
"This is followed by a linear layer over h [SUM] to obtain p ki,j = ( W p h [SUM] + b p ) , (6) where W p R 1 d , b p R , and refers to sigmoid activation.",
"Then, c ki,j is computed as w ki,j = softmax( W w h [CLS] + b w ) c ki,j = [ h i,k , h k +1 ,j ] w ki,j , (7) where W w R 2 d with w ki,j R 2 capturing the respective weights of the left and right hidden states h i,k and h k +1 ,j , and the final c ki,j is a weighted sum of h i,k and h k +1 ,j .",
"Tree Recovery.",
"As the Straight-Through Gumbel-Softmax picks the optimal splitting point k at each cell in practice, it is straightforward to recover the complete derivation tree, Tree ( T 1 ,n ), from the root node T 1 ,n in a top-down manner recursively.",
"As the core computation comes from the composition function f ( ) , our pruned tree induction algorithm aims to reduce the number of composition calls from O ( n 3 ) in the original CKY algorithm to linear.",
"Our intuition is based on the conjecture that locally optimal compositions are likely to be retained and participate in higher-level feature combination.",
"Specifically, taking T 2 in Figure 4",
"(c) as an example, we only pick locally optimal nodes from the second row of T 2 .",
"If T 24 , 5 is locally optimal and non-splittable, then all the cells highlighted in dark gray in",
"(d) may be pruned, as they break span [4 , 5] .",
"For any later encoding, including higher-level ones, we can merge the nodes and treat T 24 , 5 as a new non-splittable terminal node (see",
"(e) to",
"(g)).",
"Figure 4 walks through the steps of processing a sentence of length 6, where s i : j denotes a substring from s i to s j .",
"Algorithm 1 constructs our chart table T sequentially row-by-row.",
"Let t be the time step and m be the pruning threshold.",
"First, we invoke TREEINDUCTION ( T , m ) , and compute a row of cells at each time step when t < m as in regular CKY parsing, leading to result",
"(b) in Figure 4.",
"When t m , we call PRUNING ( T , m ) in Line 15.",
"As mentioned, the PRUNING function aims to find the locally optimal combination node in T , prunes some cells, and returns a new table omitting the pruned cells.",
"Algorithm 2 shows how we FIND the locally optimal combination node.",
"Again, the candidate set for the locally optimal node is the second row of T , and we also take advantage of the subtrees derived from all nodes in the m -th row to limit the candidate set.",
"Lines 6 to 9 in Algorithm 2 generate the candidate set.",
"Each candidate must be in the second row of T and also must be used in a subtree of any node in the m -th row.",
"Given the candidate set, we find the least ambiguous one as the optimal selection (Lines 11 to Figure 4 : Example of encoding.",
"(a) Initialized chart table.",
"(b) Row-by-row encoding up to pruning threshold m .",
"(c) For each cell in the m -th row, recover its subtree and collect candidate nodes, each of which must appear in the subtree and also must be in the 2 nd row, e.g., the tree of T 23 , 5 is within the dark line, and the candidate node is T 24 , 5 .",
"(d) Find locally optimal node, which is T 24 , 5 here, and treat span s 4:5 as non-splittable.",
"Thus, the dark gray cells become prunable.",
"(e) Construct a new chart table T 3 treating cell T 24 , 5 as a new terminal node and eliminating the prunable cells.",
"(f) Compute empty cells in m -th row.",
"(g) Keep pruning and growing the tree until no further empty cells remain.",
"(h) Final discrete chart table.",
"17), i.e., the node with maximum own probability while adjacent bi-gram node probabilities (Lines 13 and 14 ) are as low as possible.",
"After selecting the best merge point u , cells in {T ti,j | j = u } {T ti,j | i = u + 1 } are pruned (highlighted in dark gray in",
"(d)), and we generate a new table T t +1 by removing pruned nodes (Lines 4 to 9 in Algorithm 1).",
"Then we obtain",
"(e), and compute the empty cells on the m -th row of T 3 to obtain",
"(f).",
"We continue with the loop in Line 13, trigger PRUNING again, and obtain a new table T t +1 , and then fill empty cells on the m -th row T t +1 .",
"Continuing with the process until all cells are computed, as shown in",
"(g), we finally obtain a discrete chart table as given in",
"(h).",
"In terms of the time complexity, when t m , there are at most m cells to update, so the complexity of each step is less than O ( m 2 ) .",
"When t m , the complexity is O ( t 3 ) O ( m 2 t ) .",
"Thus, the overall times to call the composition function is O ( m 2 n ) , which is linear considering m is a con-stant.",
"Different from the masked language model training of BERT , we directly minimize the sum of all negative log probabilities of all words or word-pieces",
"Figure 5 : The objective for our pretrained model.",
"s i given their left and right contexts.",
"min n (cid:88) i =1 log p ( s i | s 1: i 1 , s i +1: n ) (8) As shown in Figure 5, after invoking our recursive encoder on a sentence S , we directly use e 1 ,i 1 and e i +1 ,n as the left and right contexts, respectively, for each word s i .",
"To distinguish from the encoding task, the input consists of a concatenation of a special token [MASK] , e 1 ,i 1 , and e i +1 ,n .",
"We apply the same composition function f ( ) as in Figure 3, and feed h [MASK] through an output softmax to predict the distribution of s i over the complete vocabulary.",
"Finally, we compute the cross-entropy over the prediction and ground truth distributions.",
"In cases where e 1 ,i 1 or e i +1 ,n is missing due to the pruning algorithm in Section 2.2, we simply use the left or right longest adjacent non-empty cell.",
"For example, T x,i 1 means the longest nonempty cell assuming we cannot find any non-empty T x (cid:48) ,i 1 for all x (cid:48) < x .",
"Analogously, T i +1 ,y is defined as the longest non-empty right cell.",
"Note that although the final table is sparse, the sentence representation e 1 ,n is always established.",
"As our approach (R2D2) is able to learn both representations and intermediate structure, we evaluate its representation learning ability on bidirectional language modeling and evaluate the intermediate structures on unsupervised parsing.",
"Baselines and Evaluation.",
"As the objective of our model is to predict each word with its left and right context, we use the pseudo-perplexity (PPPL) metric of Salazar et al. (2020) to evaluate bidirectional language modeling.",
"L ( S ) = 1 n n (cid:88) i =1 log P ( s i | s 1: i 1 , s i +1: n , ) PPPL ( S ) = exp 1 NN (cid:88) j =1 L ( S j ) PPPL is a bidirectional version of perplexity, establishing a macroscopic assessment of the model's ability to deal with diverse linguistic phenomena.",
"We compared our approach with SOTA autoen-coding and autoregressive language models capable of capturing bidirectional contexts, including BERT , XLNet (Yang et al., 2019), and ALBERT (Lan et al., 2020).",
"For a fair apples to apples comparison, all models use the same vocabulary and are trained from scratch on a language modeling corpus.",
"The models are all based on the open source Transformers library 2 .",
"To compute PPPL for models based on sequential Transformers, for each word s i , we only mask s i while others remain visible to predict s i .",
"When we evaluate our R2D2 model, for each word s i , we treat the left s 1: i 1 and right s i +1: n as two complete sentences separately, then encode them separately, and pick the 2 https://github.com/huggingface/transformers #param #layer #epoch cplx PPPL BERT 46M 3 10 O ( n 2 ) 441.42 XLNet 46M 3 10 O ( n ) 301.87 ALBERT 46M 12 10 O ( n 2 ) 219.20 XLNet 116M 12 10 O ( n ) 127.74 BERT 109M 12 10 O ( n 2 ) 103.54 T-LSTM ( m =4) 46M 1 10 O ( n ) 820.57 Ours ( m =4) 45M 3 10 O ( n ) 83.10 Ours ( m =8) 45M 3 10 O ( n ) 57.40 BERT 46M 3 60 O ( n 2 ) 112.17 XLNet 46M 3 60 O ( n ) 105.64 ALBERT 46M 12 60 O ( n 2 ) 71.52 XLNet 116M 12 60 O ( n ) 59.74 BERT 109M 12 60 O ( n 2 ) 44.70 Ours ( m =4) 45M 3 60 O ( n ) 55.70 Ours ( m =8) 45M 3 60 O ( n ) 54.60 Table 1 : Comparison with state-of-the-art models trained from scratch on WikiText-2 with different settings (number of Transformer layers and training epochs).",
"m is the pruning threshold.",
"root nodes as the final representations of left and right contexts.",
"In the end, we predict word s i by running our Transformers as in Figure 5.",
"Data.",
"The English language WikiText-2 corpus (Merity et al., 2017) serves as training data.",
"The dataset is split at the sentence level, and sentences longer than 128 after tokenization are discarded (about 0.03% of the original data).",
"The total number of sentences is 68,634, and the average sentence length is 33.4.",
"Hyperparameters.",
"The tree encoder of our model uses 3-layer Transformers with 768-dimensional embeddings, 3,072-dimensional hidden layer representations, and 12 attention heads.",
"Other models based on the Transformer share the same setting but vary on the number of layers.",
"Training is conducted using Adam optimization with weight decay with a learning rate of 5 10 5 .",
"The batch size is set to 8 for m = 8 and 32 for m = 4 , though we also limit the maximum total length for each batch, such that excess sentences are moved to the next batch.",
"The limit is set to 128 for m = 8 and 512 for m = 4 .",
"It takes about 43 hours for 10 epochs of training with m = 8 and about 9 hours with m = 4 , on 8 v100 GPUs.",
"Table 1 presents the results of all models with different parameters.",
"Our model outperforms other models with the same parameter size and number emb.",
"Table 2 : Training time for one epoch on a single v100 GPU, where emb.",
"and hid.",
"represent the dimensions of word embeddings and hidden state respectively, and lay.",
"is the number of transformer layers.",
"means proportionally estimated time.",
"of training epochs.",
"These results suggest that our model architecture utilizes the training data more efficiently.",
"Comparing the different pruning thresholds m = 4 and m = 8 (last two rows), the two models actually converge to a similar place after 60 epochs, confirming the effectiveness of the pruned tree induction algorithm.",
"We also replace Transformers with Tree-LSTMs as in Jang et al. (2017), denoted as T-LSTM, finding that the perplexity is significantly higher compared to other models.",
"The best score is from the BERT model with 12 layers at epoch 60.",
"Although our model has a linear time complexity, it is still a sequential encoding model, and hence its training time is not comparable to that of fully parallelizable models.",
"Thus, we do not have results of 12-layer Transformers in Table 1.",
"The experimental results comparing models with the same parameter size suggest that our model may perform even better with further deep layers.",
"Table 2 shows the training time of our R2D2 with and without pruning.",
"The last row is proportionally estimated by running the small setting ( 12 12 1 ).",
"It is clear that it is not feasible to run our R2D2 without pruning.",
"We next assess to what extent the trees that naturally arise in our model bear similarities with human-specified parse trees.",
"Baselines and Evaluation.",
"For comparison, we further include four recent strong models for unsupervised parsing with open source code: BERT masking (Wu et al., 2020), Ordered Neurons (Shen et al., 2019), DIORA (Drozdov et al., 2019) and C-PCFG (Kim et al., 2019a).",
"Following Htut et al. (2018), we train all systems on a training set consisting of raw text, and evaluate and report the results on an annotated test set.",
"As an evaluation metric, we adopt sentence-level unlabeled F 1 computed using the script from Kim et al. (2019a).",
"We compare against the non-binarized gold trees per convention.",
"The best checkpoint for each system is picked based on scores on the validation set.",
"As our model is a pretrained model based on word-pieces, for a fair comparison, we test all models with two types of input: word level (W) and word-piece level (WP) 3 .",
"To support word-piece level evaluation, we convert gold trees to word-piece level trees by simply breaking each terminal node into a non-terminal node with its word-pieces as terminals, e.g., (NN discrepancy) into (NN (WP disc) (WP ##re) (WP ##pan) (WP ##cy).",
"We set the pruning threshold m to 8 for our tree encoder.",
"To support a word-level evaluation, since our model uses word-pieces, we force it to not prune or select spans that conflict with word spans during prediction, and then merge word-pieces into words in the final output.",
"However, note that this constraint is only used for word-level prediction.",
"For training, we use the same hyperparame-ters as in Section 3.1.1.",
"Our model pretrained on WikiText-2 is finetuned on the training set with the same unsupervised loss objective.",
"For Chinese, we use a subset of Chinese Wikipedia for pretraining, specifically the first 100,000 sentences shorter than 150 characters.",
"Data.",
"We test our approach on the Penn Treebank (PTB) (Marcus et al., 1993) with the standard splits (2-21 for training, 22 for validation, 23 for test) and the same preprocessing as in recent work (Kim et al., 2019a), where we discard punctuation and lower-case all tokens.",
"To explore the universality of the model across languages, we also run experiments on Chinese Penn Treebank (CTB) 8 (Xue et al., 2005), on which we also remove punctuation.",
"Note that in all settings, the training is conducted entirely on raw unannotated text.",
"Table 3 provides the unlabeled F 1 scores of all systems on the WSJ and CTB test sets.",
"It is clear that all systems perform better than left/right branching and random trees.",
"Word-level C-PCFG (W) performs best on both the WSJ and CTB test sets when measuring against word-level gold standard trees.",
"Our system performs better than ON-LSTM (W), but worse than DIORA (W) and C-PCFG (W).",
"Still, 3 As DIORA relies on ELMO word embeddings, to support word-piece level inputs, we use BERT word-piece embeddings instead.",
"Table 3 : Unsupervised parsing results with word (W) or word-piece (WP) as input.",
"Values with are taken from Kim et al. (2019a).",
"F 1 (M) describes the max.",
"score of 4 runs with different random seeds.",
"The F 1 column shows results of our runs with a random seed.",
"The bottom three systems take word-pieces as input, and are also measured against word-piece level golden trees.",
"this is a remarkable result.",
"Note that models such as C-PCFG are specially designed for unsupervised parsing, e.g., adopting 30 nonterminals, 60 preter-minals, and a training objective that is well-aligned with unsupervised parsing.",
"In contrast, the objective of our model is that of bi-directional language modeling, and the derived binary trees are merely a by-product of our model that happen to emerge naturally from the model's preference for structures that are conducive to better language modeling.",
"Another factor is the mismatch between our training and evaluation, where we train our model at the word-piece level, but evaluate against word-level gold trees.",
"For comparison, we thus also considered DIORA (WP), C-PCFG (WP), and our system all trained on word-piece inputs, and evaluated against word-piece level gold trees.",
"The last three lines show the results, with our system achieving the best F 1 .",
"As breaking words into word-pieces introduces word boundaries as new spans, while word boundaries are easier to recognize, the overall F 1 score may increase, especially on Chinese.",
"Analysis.",
"In order to better understand why our model works better when evaluating on word-piece level golden trees, we compute the recall of constituents following Kim et al. (2019b) and Drozdov et al. (2020).",
"Besides standard constituents, we also compare the recall of word-piece chunks and (WP) WD NNP NP VP SBARWSJDIORA 81.65 77.83 71.24 17.30 22.16 C-PCFG 74.26 66.44 65.01 23.63 40.40 Ours 99.24 86.76 72.59 24.74 39.81 CTBC-PCFG 89.34 -46.74 39.53 Ours 97.16 -61.26 37.90 Table 4 : Recall of constituents and words at word-piece level.",
"WD means word.",
"proper noun chunks.",
"Proper noun chunks are extracted by finding adjacent unary nodes with same parent and tag NNP.",
"Table 4 reports the recall scores for constituents and words on the WSJ and CTB test sets.",
"Our model and DIORA perform better for small semantic units, while C-PCFG better matches larger semantic units such as VP and SBAR.",
"The recall of word chunks (WD) of our system is almost perfect and significantly better than for other algorithms.",
"Please note that all word-piece level models are trained fairly without using any boundary information.",
"Although it is trivial to recognize English word boundaries among word-pieces using rules, this is non-trivial for Chinese.",
"Additionally, the recall of proper noun segments is as well significantly better for our model compared to other algorithms.",
"We compared examples of trees inferred by our model with the corresponding ground truth constituency trees (see Appendix), encountering reasonable structures that are different from the constituent structure posited by the manually defined gold trees.",
"Experimental results of previous work (Drozdov et al., 2020; Kim et al., 2019a) also show significant variance with different random seeds.",
"Thus, we hypothesize that an isomorphy-focused F 1 evaluation with respect to gold constituency trees is insufficient to evaluate how reasonable the induced structures are.",
"In contrast, dependency grammar encodes semantic and syntactic relations directly, and has the best interlingual phrasal cohesion properties (Fox, 2002).",
"Therefore, we introduce dependency compatibility as an additional metric and re-evaluate all system outputs.",
"Baselines and Data.",
"As our approach is a word-piece level pretrained model, to enable a fair comparison, we train all models on word-pieces and WSJ CTB Model % all % n 10 % n 20 % n 40 % all % n 10 % n 20 % n 40 BERT-MASK (W) 53.53 77.03 55.46 44.66 48.56 68.89 47.27 36.62 ON-LSTM (W) 61.43 77.05 62.99 55.94 36.48 58.57 34.08 26.59 DIORA (W) 67.76 78.08 68.80 64.15 C-PCFG (W) 72.74 85.10 74.65 67.19 64.41 75.54 65.89 58.16 DIORA (WP) 54.73 68.80 55.68 49.22 C-PCFG (WP) 67.18 83.09 68.20 61.03 62.25 74.98 61.04 52.52 Ours (WP) 69.29 80.29 70.29 64.79 64.74 74.42 63.86 59.20 Table 5 : Compatibility with dependency trees.",
"(W) denotes word level inputs, (WP) refers to word-piece level inputs.",
"% all denotes the accuracy on all test sentences, while % n x is the accuracy on sentences of up to x words.",
"Values with are evaluated with predicted trees from Kim et al. (2019a) Figure 6 : Examples of compatible and incompatible spans.",
"learn models with the same settings as in the original papers.",
"Evaluation at the word-piece level reveals the model's ability to learn structure from a smaller granularity.",
"In this section, we keep the word-level gold trees unchanged and invoke Stanford CoreNLP (Manning et al., 2014) to convert the WSJ and CTB into dependency trees.",
"Evaluation.",
"Our metric is based on the notion of quantifying the compatibility of a tree by counting how many spans comply with dependency relations in the gold dependency tree.",
"Specifically, as illustrated in Figure 6, a span is deemed compatible with the ground truth if and only if this span forms an independent subtree.",
"Formally, given a gold dependency tree D , we denote as S ( D ) the raw token sequence for D .",
"Considering predicting a binary tree for word-level input, predicted spans in the binary tree are denoted as Z .",
"For any span z Z , the subgraph of D including nodes in z and directional edges between them is referred to as G z .",
"O ( G z ) is defined as the set of nodes with parent nodes not in G z and I ( G z ) denotes the set of nodes whose child nodes are not in G z .",
"Thus, |O ( G z ) | and |I ( G z ) | are the out-degree and in-degree of the subgraph G z .",
"Let I( z ) denote whether z is valid, defined as I ( z ) (cid:26) 1 , |O ( G z ) | = 1 and I ( G z ) O ( G z ) 0 , otherwise.",
"For binary tree spans for word-piece level input, if z breaks word-piece spans, then I( z ) = 0 .",
"Otherwise, word-pieces are merged to words and the word-level logic is followed.",
"Specifically, to make the results at the word and word-piece levels comparable, I( z ) is forced to be zero if z only covers a single word.",
"The final compatibility for Z is (cid:80) z Z I( z ) |S ( D ) | 1 .",
"Table 5 lists system results on the WSJ and CTB test sets.",
"% all refers to the accuracy on all test sentences, while % n x is the accuracy on sentences with up to x words.",
"It is clear that the smaller granularity at the word-piece level makes this task harder.",
"Our model performs better than other systems at the word-piece level on both English and Chinese and even outperforms the baselines in many cases at the word level.",
"It is worth noting that the result is evaluated on the same binary predicted trees as we use for unsupervised constituency parsing, yet our model outperforms baselines that perform better in Table 3.",
"One possible interpretation is that our approach learns to prefer structures different from human-defined phrase structure grammar but self-consistent and compatible with a tree structure.",
"To further understand the strengths and weaknesses of each baseline, we analyzed the compatibility of different sentence length ranges.",
"Interestingly, we find that our approach performs better on long sentences compared with C-PCFG at the word-piece level.",
"This shows that a bidirectional language modeling objective can learn to induce accurate structures even on very long sentences, on which custom-tailored methods may not work as well.",
"Pre-trained models.",
"Pre-trained models have achieved significant success across numerous tasks.",
"ELMo (Peters et al., 2018), pretrained on bidirectional language modeling based on bi-LSTMs, was the first model to show significant improvements across many downstream tasks.",
"GPT (Rad-ford et al., 2018) replaces bi-LSTMs with a Transformer (Vaswani et al., 2017).",
"As the global attention mechanism may reveal contextual information, it uses a left-to-right Transformer to predict the next word given the previous context.",
"BERT (De-vlin et al., 2019) proposes masked language modeling (MLM) to enable bidirectional modeling while avoiding contextual information leakage by directly masking part of input tokens.",
"As masking input tokens results in missing semantics, XLNET (Yang et al., 2019) proposes permuted language modeling (PLM), where all bi-directional tokens are visible when predicting masked tokens.",
"However, all aforementioned Transformer based models do not naturally capture positional information on their own and do not have explicit interpretable structural information, which is an essential feature of natural language.",
"To alleviate the above shortcomings, we extend pre-training and the Transformer model to structural language models.",
"Representation with structures.",
"In the line of work on learning a sentence representation with structures, Socher et al. (2011) proposed the first neural network model applying recursive autoencoders to learn sentence representations, but their approach constructs trees in a greedy way, and it is still unclear how autoencoders can perform against large pre-trained models (e.g., BERT ).",
"Yogatama et al. (2017) jointly train their shift-reduce parser and sentence embedding components.",
"As their parser is not differentiable, they have to resort to reinforcement training, but the learned structures collapse to trivial left/right branching trees.",
"The work of URNNG (Kim et al., 2019b) applies variational inference over latent trees to perform unsupervised optimization of the RNNG (Dyer et al., 2016), an RNN model that estimates a joint distribution over sentences and trees based on shift-reduce operations.",
"Maillard et al. (2017) propose an alternative approach, based on CKY parsing.",
"The algorithm is made differentiable by using a soft-gating approach, which approximates discrete candidate selection by a probabilistic mixture of the constituents available in a given cell of the chart.",
"In this paper, we have proposed an efficient CKY-based recursive Transformer to directly model hierarchical structure in linguistic utterances.",
"We have ascertained the effectiveness of our approach on language modeling and unsupervised parsing.",
"With the help of our efficient linear pruned tree induction algorithm, our model quickly learns interpretable tree structures without any syntactic supervision, which yet prove highly compatible with human-annotated trees.",
"As future work, we are investigating pre-training our model on billion word corpora as done for BERT , and fine-tuning our model on downstream tasks.",
"We thank Liqian Sun, the wife of Xiang Hu, for taking care of their newborn baby during critical phases, which enabled Xiang to polish the work and perform experiments."
] | [
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"objective",
"other"
] |
[
"Given the diversity of the candidates and complexity of job requirements, and since interviewing is an inherently subjective process, it is an important task to ensure consistent, uniform, efficient and objective interviews that result in high quality recruitment.",
"We propose an interview assistant system to automatically, and in an objective manner, select an optimal set of technical questions (from question banks) personalized for a candidate.",
"This set can help a human interviewer to plan for an upcoming interview of that candidate.",
"We formalize the problem of selecting a set of questions as an integer linear programming problem and use standard solvers to get a solution.",
"We use knowledge graph as background knowledge in this formulation, and derive our objective functions and constraints from it.",
"We use candidate's resume to personalize the selection of questions.",
"We propose an intrinsic evaluation to compare a set of suggested questions with actually asked questions.",
"We also use expert interviewers to comparatively evaluate our approach with a set of reasonable baselines.",
"A large multi-national IT company added roughly 70,000 employees in FY2018-19.",
"1 Assuming an average interview time of 30 minutes, 3 interviewers in each interview, and 4 candidates interviewed for every position, implies approximately 420,000 person-hours were spent in one year just on conducting the interviews.",
"Given the diversity of the candidates and complexity of job requirements, and considering that interviewing is an inherently human and subjective process, it is a mammoth task to ensure consistent, uniform, efficient and objective interviews that result in high quality re-1 https://www.tcs.com/content/dam/tcs/investor-relations/financial-statements/2018-19/ar/annual-report-2018-2019.pdf cruitment.",
"AI and ML technologies are increasingly playing important roles in helping improve recruitment quality, e.g., Faliagka et al. (2012), Javed et al. (2015), Palshikar et al. (2017), although ethical issues are emerging.",
"2 In this paper, we consider one particular way to assist human interviewers in improving the quality of their interviews and in reducing subjectivity.",
"Before conducting an interview, an interviewer typically studies the candidate's resume, noting salient points about her education, skills, job history, roles, projects, tasks handled etc.",
"The interviewer also notes the apparent strengths and weaknesses of the candidate, as also the extent to which she matches (and does not match) the job profile for which she will be interviewed.",
"In short, the interviewer builds an a priori rough mental profile of the candidate, and prepares a mental plan of how to interview her.",
"Such a plan includes preparing an unordered set of questions that the interviewer would like to ask the candidate.",
"In this paper, we propose an interview assistant system to automatically, and in an objective, unbiased manner, build such a set of questions for a human interviewer, which can be part of a plan for an upcoming interview.",
"We assume we have question banks from where questions can be selected.",
"Note that such a plan is static , and the actual sequence of questions asked by an interviewer may diverge from the static plan, due to dynamic and contextual reasons observed during the flow of the interview.",
"Such reasons include",
"(i) mismatch between the interviewer's prior impression about the strengths of the candidate and the quality of the candidate's actual answers; (ii) the questions asked by other interviewers, if they are present.",
"Nevertheless, such a plan generated by the system is still useful, as it reduces the cognitive load on the interviewer, and brings some standardization 2 https://hbr.org/2019/04/the-legal-and-ethical-implications-of-using-ai-in-hiring and objectivity to an interview.",
"Having a system-suggested set of questions, personalized for a particular candidate, before starting the interview is useful for the interviewer.",
"The questions help in getting to a good start and also give ideas about where to focus during the interview.",
"The novel contributions of this paper are as follows.",
"We formalize the problem of selecting a set of questions as an integer programming optimization problem and use standard solvers to get a solution.",
"We use knowledge graph as background knowledge, and formulate our objective functions and constraints from it.",
"To our knowledge, this is the first paper to address the problem of creating an optimal interview plan.",
"We report experiments on a real dataset of candidates and interview questions and compare against a state-of-the-art baseline using both intrinsic and human study based evaluation.",
"The rest of the paper is organized as follows.",
"Section 2 summarizes related work, Section 3 formulates the optimization problem, Section 4 gives details of our novel evaluation measure, Sec-tionrefsec:nlp describes the use of NLP techniques to build the requisite resources, Section 6 describes the baselines used for comparison, Section 7 describes the our experimental results, and Section 8 concludes the paper.",
"Work in this paper is close to the field of computerized adaptive testing (CAT) (Van der Linden and Glas, 2000), where the task is to select questions (also called items ) on-the-fly from a question bank, depending on how the student has answered the questions so far (i.e., adjusting to her ability level), with goals of creating shorter tests that yield better differentiation among students.",
"CAT techniques are used in large-scale general online examinations like GRE, and in specialized medical licensing or certification examinations, e.g., in clinical pathology, emergency medicine, and pharmacy.",
"We are not aware of any work on applying CAT techniques to interviews.",
"We first outline the key differences between our work and CAT, which stem from the obvious differences between interviews and examinations.",
"Interviews are not really examinations, but are human, face-to-face, oral, and two-way interactions among a single candidate and possibly multiple interviewers.",
"There is no set question paper, no rigid time-limit and interview questions need short free-form textual or spoken answers whereas CAT deals mostly with examinations administering multiple-choice questions.",
"Unlike an examination, the interactions are two-way; e.g., the candidate can ask for clarification about a question.",
"Goals of an interview are different from that of an examination, e.g., assessing fitment for job position requiring multiple skills, rather than assessing depth and breadth in a fixed subject.",
"Students are anonymous in CAT, whereas interviewers have detailed knowledge about the candidate; e.g., through her resume.",
"CAT is about a dynamically assembled, personalized, sequence of questions which are dependent on the student's answers so far, whereas in this paper we deal with a static one-time selection of interview questions, with no dynamic adjustment as per the candidate's answers i.e., in this paper we cannot estimate the candidate's latent abilities, since we do not have her answers.",
"This prevents a direct comparison of our work with most CAT techniques.",
"See Han (2018) for a review of CAT research.",
"The key aspects of CAT are: item selection criteria, content balancing (ensuring coverage of all sub-areas in the subject) and item exposure control (using randomization to prevent excessive item reuse across multiple examinations).",
"Many item selection approaches are formulated using item response theory and use information-theoretic criteria; e.g., Fisher information (Weiss, 1982), efficiency balanced information (EBI) (Han, 2012), Kullback-Liebler information (Chang and Ying, 1996).",
"Various item exposure control methods have been proposed to reduce overuse of good items; see Stocking and Lewis (2000) for a survey of early methods.",
"While some CAT systems use the above 3 aspects separately, the automated test assembly (ATA) approaches use them together in an optimization framework such as linear programming or mixed integer programming, where content balancing criteria are constraints and item selection criteria are objective functions; e.g., Theunis-sen (1986), der Linden and Boekkooi-Timminga (1989), Stocking and Swanson (1993), Swanson and Stocking (1993).",
"For a comparative analysis of such optimization approaches, see der Linden (2005) and Luo (2020).",
"Again, it is difficult to directly compare our work with these optimization-based approaches, because of our static setting in interviews where no answers are available to estimate candidate proficiency (unlike CAT).",
"In most CAT approaches, the information available for each item is rather limited: subject, sub-area, difficulty level, discrimination level etc.",
"We have used knowledge graphs to create a semantically rich and detailed characterization of questions in terms of concepts.",
"Our optimization formulation uses the knowledge graph to generate novel constraints (including content balancing) and objective functions for item selection.",
"CAT algorithms cannot be directly used as baselines, because",
"(i) they output an ordered sequence of questions (we output an unordered set of ques-tions); and (ii) they need candidate answers, which are not available to us.",
"DuerQuiz (Qin et al., 2019) starts from job descriptions and resumes of candidates, and considers the problem of recommending a set of questions using a skill graph.",
"It uses knowledge of resumes of candidates who have been hired in the past.",
"It additionally considers the task of extracting skills from resumes and job descriptions, and construction of the skill graph, which are not our primary focus.",
"For the actual task of question selection for a specific resume, DuerQuiz initializes weights of concepts based on the job description, historical resumes and the focus resume, and then dissipates those weights over descendant concepts in the skill graph.",
"Finally, the weights determine the number of questions selected from a concept.",
"It does not consider the notion of question difficulty, or relations between questions.",
"For concreteness, in this paper we focus on candidates in the IT domain.",
"We start by noting that an interviewer asks different types of questions.",
"Technical questions explore the breadth and depth of the candidate's understanding of a particular technical skill.",
"Other than technical questions, interviewers also ask techno-experience questions (e.g. about skills in projects), methodological questions , behavioural questions , among others.",
"For concreteness, in this paper we focus only on technical questions about skills in the IT domain.",
"We focus on entry-level candidates (freshers or those with less than 1 year experi-ence), because for more experienced candidates the interviewers tend to move quickly to techno-experience questions.",
"We also assume the ques-maximize f 1 : | Q | (cid:88) i =1 x i | ( q i ) | + f 2 : | Q | (cid:88) i =1 | Q | (cid:88) j>i x i x j is _ qgraph _ edge ( q i , q j ) + f 3 : | Q | (cid:88) i =1 | Q | (cid:88) j>i x i x j is _ qgraph _ path ( q i , q j ) + f 4 : | Q | (cid:88) i =1 x i ( ( q i ) ( WR ) (cid:54) = ) + f 5 : | Q | (cid:88) i =1 x i ( ( q i ) ( WJ ) (cid:54) = ) such that C1 : | Q | (cid:88) i =1 x i T ( q i ) ( q i ) T C2(k) : | Q | (cid:88) i =1 x i ( ( q i ) == k ) ( m k ( | Q | (cid:88) i =1 x i )) C5 : | Q | (cid:88) i =1 x i ( q i ) h 0 | Q | (cid:88) i =1 x i Figure 1: The integer programming problem tions are such that they require short answers, typically containing up to 5-6 sentences.",
"Given a candidate resume R , a technical skill s , and a question bank QB s about that skill, the task we address is: how to select the best questions from QB s , which maximize some objective functions and meet the required constraints?",
"The questions need to selected from QB s and need to be highly personalized for the candidate in the sense that they should be closely related to the candidate's background mentioned in R .",
"The complete optimization formulation is given in Fig. 1. We use the term skill to refer to a broad technical area; examples: Python, Machine_Learning, Algorithms, Networking etc.",
"Let s denote a given skill.",
"Let C s (or just C if the skill is clear) be a set of concepts related to a given skill.",
"Relationships such as IS-A, HAS-A (inverse of IS-PART-OF ) hold between pairs of concepts; e.g., array IS-A data_structure and class HAS-A method .",
"We represent the concepts in a particular skill and their inter-relationships as a knowledge graph G = ( C, E, ) , where the vertices are concepts, E is the set of directed edges linking pairs of concepts, and : E REL is the edge labeling function that associates a relationship name with every edge.",
"Fig. 3 shows a small part of a concept graph for the skill Python ; here, each vertex corresponds to a concept, the edges show relationships between concepts and the edge label is shown on each arrow.",
"Neighbourhood of a concept u C , denoted ( u ) , is the set of concepts directly connected to u in the knowledge graph, along with u itself.",
"For simplicity, we ignore the edge direction and edge label when computing ( u ) .",
"Example: (11) = { 4 , 11 , 12 , 15 } .",
"Neighbourhood of a set of concepts B = { u 1 , . . . , u k } is the union of their neighbourhoods: ( B ) = ( u 1 ) . . . ( u k ) .",
"We assume we have a question bank, which is a set of questions about a particular skill, along with some other information with each question.",
"Formally, a question bank for a skill s is QB s = ( Q, , ) , where Q is a set of questions about s , the function ( q ) associates a difficulty level with every question q Q , and the function ( q ) associates a non-empty subset of concepts with every question q Q .",
"3 A difficulty level is 0 (easy), 1 (medium), 2 (hard); a more nuanced scale can be easily incorporated.",
"Fig. 2 shows a small question bank containing 13 questions for the skill Python ; it also shows the difficulty level and the subset of concepts (from the knowledge graph of Fig. 3) associated with each question.",
"In order to prevent the same subset of questions being identified by our solution for very similar resumes, we could either shuffle the questions in Q , or use only a random subset of Q as input.",
"Coverage of a question q , denoted ( q ) , is the set of concepts ( q ) associated with q , along with the concepts at 1-hop from each concept in ( q ) .",
"For simplicity, we ignore the edge direction and edge label when computing ( q ) .",
"Example: ( q 2 ) = { 20 , 21 , 17 , 24 } .",
"Let x i , 1 i | Q | be the set of Boolean variables, where if x i = 1 the question q i is included in a set of questions, and not included if x i = 0 .",
"Then the first term f 1 in our objective function selects a set of questions which has the maximum coverage.",
"Different candidates can take different amounts of time T ( q ) for answering any particular question q in the QB.",
"This time is candidate specific and unknown a priori .",
"We have a simple model 3 A more realistic setting would associate an ordered sequence of concepts with a question, ranked in terms of the decreasing relevance of the concepts to the question.",
"to accommodate this time: a candidate takes time T i ( q ) minutes to answer a question q having difficulty level ( q ) = i ; for concreteness, we assume T 0 ( q ) = 1 , T 1 ( q ) = 2 , T 2 ( q ) = 3 .",
"This simpli-fied model predicts the same time, for all candidates, for all questions having a particular difficulty level; a more nuanced approach would be, for example, to learn the time distribution from data of past interviews.",
"Interviewers often have an informal time-limit ( budget ) T on the time to spend on a particular skill.",
"So we have constraint C1: the total estimated time taken to answer the selected questions must be at most T .",
"In order to prevent selection of only easy questions, we introduce constraints",
"{C2(j)} for j 0 , 1 , 2 that force a more equitable user-specified distribution of difficulty levels in selected questions.",
"The user-specified constants 0 m 0 , m 1 , m 2 1 , m 0 + m 1 + m 2 = 1 give control to the user to generate questions that suit a particular style; e.g., setting m 0 = 0 .",
"2 , m 1 = 0 .",
"2 , m 2 = 0 .",
"6 will tend to select more hard questions.",
"Questions asked in an interview are often related to another question, indicating exploration of the depth of a candidate's knowledge.",
"Given a set of questions A = { q 1 , . . . , q k } , we define a question graph GA , whose vertices are the questions in A and two questions q i , q j have an undirected edge if ( q i ) ( q j ) (cid:54) = .",
"In general, GA may be a disconnected graph.",
"A path of length 1 or more indicates a sequence of inter-related questions.",
"Fig. 4 shows the question graph for the questions in Fig. 2. A path P in a graph is a longest path (or chain ) if P is not a sub-path of any other path in the graph.",
"Now we have another term f 2 in our objective function: maximize the sum of the lengths of all longest paths in GA .",
"Since this is computationally expensive, we can use as an approximation the number of edges in GA , since an edge indicates a sequence of two questions.",
"Note that this term has a quadratic form, which can be easily linearized taking advantage of the fact that the decision variables are all binary (we omit this reformulation).",
"Questions asked in an interview are often unrelated to another question, indicating exploration of the breadth of a candidate's knowledge.",
"We define two questions q i , q j as unrelated if there is no path between them in GA i.e., q i is unreachable from q j and vice versa.",
"Analogously, we define two paths P 1 , P 2 in GA as unrelated if no vertex in P 2 is reachable from any vertex in P 1 and vice versa.",
"Now we have another term f 3 : maximize the number of pairs of paths which are unrelated.",
"Since this is computationally expensive, we can use as an approximation the number of pairs of vertices which are unreachable via paths from each other.",
"Note that this objective also has the quadratic form, which we linearize.",
"An interview often includes questions about concepts related to a given skill which are mentioned in the candidate's resume R .",
"Let WR = { w 1 , w 2 , . . . , w (cid:96) } denote the (cid:96) concepts mentioned in the candidate's resume; e.g., in the descriptions of her jobs, projects, trainings etc.",
"A reasonable interview must include questions related to as many concepts in the neighbourhood of WR as possible, giving us another objective function term f 4 .",
"We can further refine this objective to specifically consider questions directly related to WR , giving us an additional term f d 4 , with WR instead of ( WR ) and ( q ) instead of ( q ) .",
"We could refine this even further, if we could estimate from the resume that the candidate has different proficiency levels in different concepts; e.g., if a candidate has worked in Flash in 1 project of 3 months and in numpy in two projects for 11 months, then she is clearly stronger in numpy than in Flash .",
"Analogously, let WJ denote the set of concepts relevant to a given job description for which the candidate is being interviewed.",
"A reasonable interview must include questions related to as many concepts in the neighborhood WJ as possible, giving us term f 5 .",
"As for f 4 , here as well we consider a direct version f d 5 , with WJ replacing ( WJ ) and ( q ) replacing ( q ) .",
"Suppose we have some idea of the proficiency level ( s ) that a particular candidate has in a given skill s .",
"This estimate could be generated from the information in the resume (projects, tasks, trainings) or from other sources, such as the scores in a prior written test.",
"Suppose the estimated proficiency level in a skill is an integer from 0 (does not know) to 4 (expert).",
"We should take this input into account in order to adjust the difficulty level of selected questions; e.g., a candidate with proficiency level ( s ) = 3 should be asked fairly difficult questions.",
"This gives us constraint C5, which says that the average difficulty level of selected questions should be above a user-specified Figure 2: A small question bank for the skill Python .",
"constant h 0 , which can be derived from the proficiency level ( s ) of the candidate in skill s .",
"We normalize the terms in the objective function so that these take values in [0 , 1] .",
"Further, we take a weighted sum (instead of the plain sum) of the terms: w 1 f 1 + . . . + w 5 f 5 , where w 1 , . . . , w 4 , w d 4 , w 5 , w d 5 are user-given positive real weights.",
"The weights will allow the interviewer to change the relative importance of the terms.",
"Interview Plans for a Set of Candidates: The optimization program discussed so far is useful to generate an interview plan for one particular candidate.",
"However, in many situations (such as campus interviews), there is a sequence of interviews for multiple candidates and the system is required to generate a system plan for each of them.",
"There are additional constraints on the set of interview plans generated in such a situation.",
"For example, the repetition of questions across multiple candidates should be minimized i.e., different candidates (even those having a similar background) should by-and-large get different sets of questions.",
"Let N 0 denote the number of candidates to be interviewed in the current campaign.",
"We assume that a single question bank QB s for skill s (or, just Q for simplicity) will be used to se-Figure 4: Question graph for the example questions.",
"lect questions for each of the N 0 candidates.",
"Let gen _ interview _ plan denote the above optimization program for selecting questions for a given candidate.",
"Output of this program is a Boolean vector sol , where sol ( i ) = 1 if question q i Q is to be included in the interview plan for the given candidate; 0 otherwise.",
"We extend our earlier definition of a question bank by adding another property for each question viz., novelty count ( q i ) , which is the N 0 count ( q i ) , with count ( q i ) being number of times the question q i Q has been used so far.",
"Initially count ( q i ) = 0 and ( q i ) = N 0 for each question in Q .",
"We sequentially call gen _ interview _ plan for each of the N 0 candidates and use the output Boolean vector sol to increment ( q i ) for each question in Q ; see algorithm gen _ interview _ plan _ set .",
"end Algorithm 1: gen _ interview _ plan _ set",
"The optimization program gen _ interview _ plan is same as earlier, except that we have added a new term f 6 to the objective function that maximizes the sum of novelty counts of the questions.",
"Again, we normalize this to ensure that the value is between 0 and 1. Note that the novelty counts are updated for all questions after each call to the optimization program.",
"Integer programming being an NP-Hard problem, exact solvers often take a lot of time.",
"Instead, we resorted to the LP rounding approximation, where we relax the integer constraints x i { 0 , 1 } to x i [0 , 1] and used available LP solvers (CBC solver in python Pulp) to solve the resultant linear programming problem.",
"Then we rounded the x i values to { 0 , 1 } using a threshold.",
"While we do use human experts to compare two sets of questions, here we propose an automated intrinsic evaluation method to judge the quality of the set of questions (e.g., selected by our optimization model, or by any baseline method discussed later) for a particular candidate, by comparing this set with the set of questions actually asked to this candidate in a real interview.",
"Let A i denote the set of actual questions asked to i -th candidate, ignoring the order of asking; we can get A i from interview transcripts.",
"Let Q i be the set of questions recommended by some algorithm for the i -th candidate.",
"In general, | A i | (cid:54) = | Q i | .",
"We assume that both Q i and A i are about the given skill s .",
"Suppose we have a Boolean function is _ qsim (discussed shortly) that returns T RUE if two given questions are highly similar and F ALSE otherwise.",
"In forward evaluation, we compare each question q j Q i with questions in A i and define a forward Boolean score such that b f ( q j ) = 1 is there is at least one question a k A i such that is _ qsim ( q j , a k ) = T RUE and 0 otherwise.",
"The quality of Q i is evaluated based on the number of questions in Q i having score 1. qual f ( Q i ) = 1 | A i | | Q i | (cid:88) j =1 b f ( q j ) (2) Enforcing an additional constraint that a question in A i is matched to at most one question in Q i , ensures that score qual f ( Q i ) is between 0 and 1. In backward evaluation, we compare each question a j A i with questions in Q i and define a backward Boolean score such that b b ( a j ) = 1 if there is at least one question q k Q i such that is q sim ( q k , a j ) = T RUE and 0 otherwise.",
"The quality measure qual b ( A i ) is defined analogously.",
"It remains now to define is _ qsim .",
"Several similarity measures have been designed in the literature for short text similarity.",
"For efficiency, we consider a simple measure: two questions are similar if their coverages share at least a required minimum number of concepts k 0 (which is a user-specified constant): is _ qsim ( q i , q j ) = (cid:40) T if | ( q i ) ( q j ) | k 0 F otherwise (3) For example, questions 12 and 13 in Fig. 2 are similar (i.e., is _ qsim (12 , 13) = T RUE ), assuming k 0 = 3 , since concepts 8, 10, 14 are common in (12) and (13) .",
"While natural language processing does not play a direct role in the optimization formulation for the selection of interview questions, it is crucial for the creation of the prerequisite resources.",
"The first task is the annotation of questions to identify concepts.",
"These concepts are used to determine question coverage and to construct the question graph based on their coverage.",
"We also use these annotations to construct the knowledge graph for skills, as we explain below.",
"There is a rich body of literature in both mention detection and named entity disambiguation (Ferragina and Scaiella, 2012; Ganea and Hofmann, 2017; Sakor et al., 2020).",
"Since we focus on skill-graphs which are largely sub-graphs of DBPedia, we use the publicly available APIs from TAGME (Ferrag-ina and Scaiella, 2012).",
"However, this resulted in two types of errors.",
"Many mentions were annotated with concepts irrelevant for our skills.",
"Secondly, many mentions relevant for our skills were left unannotated.",
"We performed manual curation on the TAGME annotations to correct these two types of errors.",
"The second task is extraction of skills from resumes.",
"We use an information extraction system called RINX (Pawar et al., 2017), which uses gazetteer-based, linguistic patterns based machine learning methods (e.g., CRF, BiLSTM) to extract mentions of various entity types and relations from resumes.",
"For example, RINX extracts mentions of SKILL (e.g., Machine_learning, Python , CONCEPT (e.g., Activation function, Maximum margin ), ROLE (e.g., Developer, DBA, Test_Engineer ), TASK (e.g., Developed vendor master, Performance Tuning ), among other entity types.",
"The extracted skills are again annotated according to concepts by using TAGME, and manually curated to correct errors.",
"The next task is construction of the knowledge graph for different skills, based on the concept annotation of the questions and the extracted resume skills.",
"This problem has not received enough attention in the computational linguistics community.",
"We first identified a subgraph of DBPedia (Faralli et al., 2018) using the question concepts and the resume skill concepts as positive seeds.",
"We then curated this knowledge graph manually to correct any errors.",
"The next task is assigning difficulty levels to questions.",
"This problem has also received very lit-tle attention (Pad, 2017).",
"We use the following simple approach.",
"Use any automatic answer extraction technique to extract an answer text A for q , from a suitable corpus like Wikipedia or a textbook.",
"Let ( A ) be the set of concepts associated with A .",
"The degree d ( u ) of a concept vertex u in the knowledge graph, ignoring edge directions, is a good approximation of the complexity of that concept; a concept is complex, if it is directly related to many other concepts.",
"Thus the sum of the complexities of the individual concepts in the answer to a question is a good measure of the complexity of that question: ( q ) = (cid:80) u ( q ) d ( u ) .",
"We can now write simple rules to assign a difficulty level to each question: if ( q ) c 0 then ( q ) = 0 else if c 0 < ( q ) c 1 then ( q ) = 1 else ( q ) = 2 ( c 0 , c 1 , c 2 are user-specified con-stants).",
"More complex approaches, such as applying machine learning to predict difficulty levels for questions, are possible.",
"Finally, we identify similar questions for our evaluation.",
"The is_qsim() function (Eqn.3) uses overlap between annotated concepts for simplicity.",
"Clearly, there is a need for more sophisticated approaches, for example using paraphrase detection (Galbraith et al., 2017).",
"To compare against out integer programming approach (IP), we use the following baselines for selecting questions for a candidate having resume R",
"BR1 : Select n q questions randomly from QB s , where n q is same as the number of questions in the optimal plan.",
"BR2 : Let FR ( s ) denote the set of concepts related to skill s mentioned in resume R .",
"Select n q questions randomly from QB s , where coverage of each selected question q has at least one concept common with the neighbourhood of the concept set FR ( s ) i.e., ( q ) ( FR ( s )) (cid:54) = .",
"BR3 : same as BR2 , but ensures even distribution of question difficulty levels.",
"DuerQuiz (Qin et al., 2019): discussed in Section 2. Since no implementation is publicly available, we implemented our own version of this baseline.",
"Since we do not use historical resumes or skill graph edge labels in our framework, we adapted DuerQuiz in the best possible way for out setting.",
"We ignore the terms corresponding to historical resumes in the weight assignment to concepts.",
"We further approximate descendants of concepts as their direct neighbors in the skill graph, both for weight initialization and weight propagation.",
"For our MIP formulation, we use the following weights and hyper-parameters: w 1 = 100 , w 2 = 100 , w 3 = 100 , w 4 = 30 , w d 4 = 70 , w 5 = 30 , w d 5 = 70 , m 0 = 0 .",
"3 , m 1 = 0 .",
"4 , m 2 = 0 .",
"3 , h 0 = 0 .",
"9 and T = 45 .",
"These weights were not fine-tuned, aside for T for controlling the number of recommended question.",
"For DuerQuiz, the paper does not recommend any thumb rule for setting hyper-parameters.",
"We set the propagation weight c = 0 .",
"85 by hand-tuning on one resume, and the smoothing weight f = 0 .",
"001 .",
"In this section, we describe our dataset derived from real interviews, and the experiments that we conducted using this dataset.",
"We report comparisons of our proposed approach (IP) against the baselines defined in Section 6 using both actual interview questions and as well as an user-study for evaluation.",
"We leave out BR2 and instead use its stronger version BR3.",
"All experiments were performed on an Ubuntu 18.04.5 LTS machine with 8-core Intel i7-8550U 1.80GHz processors, and 16 GB memory.",
"For IP, generation of questions with 45min time budget ( 25 questions) takes 155 secs on average.",
"Dataset: We constructed a knowledge graph of 714 concepts and 903 edges (avg. degree 2.51) from Machine Learning and Deep Learning.",
"Our question bank consists of 549 questions from these two skills.",
"Each question is annotated with concepts from the knowledge graph (1.18 concepts per question on average).",
"Finally, we use real resumes of 40 candidates (mostly fresh IT graduates) interviewed by our organization over the last year.",
"We identify knowledge graph concepts associated with their resumes (4.7 concepts per resume on average).",
"For 20 of these candidates, we also have the actual questions asked to them during their interviews.",
"Of these, we consider only the questions related to our two topics of interest.",
"The average number of questions per candidate is 5.05.",
"Intrinsic Evaluation on Real Interviews: In our first evaluation, we compared the set of suggested questions with the set of actually asked questions.",
"Fig. 5 shows a comparison of our optimization formulation with the three baselines, using the forward and backward quality measures (Section 4).",
"As seen, our approach is clearly better than all three baselines in both evaluations and for different values of k 0 .",
"The differences are large for backward evaluation.",
"The improvement against BR1 shows the importance of focusing on the resume, rather than randomly selecting questions related to the skill.",
"The improvement against BR3 shows that just focusing on questions related to the resume is not enough.",
"Finally, the improvement against DuerQuiz, which combines aspects of both BR1 and BR3, shows the importance of the additional terms in our objective function.",
"Also, our analysis shows that DuerQuiz is poor at balancing between high-degree and low-degree concepts in the knowledge graph.",
"Depending on the value of its dissipation hyper-parameter ( c ), it either transfers all the weight of high-degree concepts to their neighbors, or does not transfer any weight from low-degree concepts to their neighbors.",
"IP's trade-off using different terms and their corresponding weights works much better.",
"We further note that BR1 and BR3 perform better than DuerQuiz in terms of forward evaluation, which indicates that these generate fewer irrelevant questions.",
"On the other hand, DuerQuiz is better than these baselines in terms of backward evaluation.",
"This indicates that the questions generated by these baselines are more heterogeneous and lack diversity when compared against DuerQuiz to cover all questions asked during a real interview.",
"Note that IP outperforms DuerQuiz in both directions.",
"compares question sets generated using pairs of algorithms by 3 experienced human interviewers E 1 , E 2 , E 3 .",
"We have 3 pairs of algorithms to compare: (IP, BR1), (IP, BR3), (IP, DuerQuiz).",
"Note that to reduce the load on the human evaluators, we did not invest in comparing the baselines with each other.",
"We randomly assign one of these pairs to each of the N = 20 candidates; e.g., 7 candidates got the pair (LP, BR1) and so forth.",
"For each candidate, we generated two sets of questions, for the skill Machine Learning , using the algorithm pair assigned to it.",
"Hiding the algorithm used to generate the question sets, we presented the two sets for the 20 candidates to each of the 3 experts, along with skills extracted from their resumes.",
"For each candidate, each human expert gave a comparative ranking, indicating whether set 1 was better than set 2. We had not suggested any criterion for this comparison; each expert used her own intuition.",
"There were 7 3 = 21 evaluations of (IP, BR1) pair, out of which IP won in 19.",
"Using 2 test with 99.9% confidence, we reject the null hypothesis and accept that IP is better than BR1.",
"Similarly, IP is better than BR3 in 14 out of 21 evaluations ( 2 85% confidence).",
"Unfortunately, IP is better than DuerQuiz in only 6 out of 21 evaluations.",
"However, there was large disagreement among the experts in this case, and discussions showed that the experts' evaluation criteria were considerably simpler than the objective functions used in IP.",
"For example, no expert considered the inter-linking of the questions in her evaluation, nor did they consider duplication of questions across different candidates as undesirable; but these are important factors in IP for choosing questions.",
"In the future, we intend to perform a larger expert study with a more nuanced evaluation which compares specific quality aspects of the question sets.",
"We have proposed an interview assistant system to automatically select an optimal set of technical questions (from a question bank) personalized for a candidate.",
"We formalized the problem of selecting a set of questions from question banks as an integer programming problem, with multiple terms in the objective functions and multiple constraints.",
"We used knowledge graph as background knowledge, and used the candidate's resume to personalize the selection of questions.",
"We proposed a novel intrinsic evaluation to compare a set of suggested questions with actually asked questions in real interviews.",
"We also used expert human interviewers to comparatively evaluate our approach with a set of reasonable baselines.",
"Our comparisons against state-of-the-art and ablated baselines show the usefulness of our proposed approach."
] | [
"abstain",
"objective",
"abstain",
"objective",
"objective",
"method",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"objective",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"objective",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"method",
"objective"
] |
[
"Pre-trained language models have been recently shown to benefit task-oriented dialogue (TOD) systems.",
"Despite their success, existing methods often formulate this task as a cascaded generation problem which can lead to error accumulation across different sub-tasks and greater data annotation overhead.",
"In this study, we present PPTOD, a unified plug-and-play model for task-oriented dialogue.",
"In addition, we introduce a new dialogue multi-task pre-training strategy that allows the model to learn the primary TOD task completion skills from heterogeneous dialog corpora.",
"We extensively test our model on three benchmark TOD tasks, including end-to-end dialogue modelling, dialogue state tracking, and intent classification.",
"Experimental results show that PPTOD achieves new state of the art on all evaluated tasks in both high-resource and low-resource scenarios.",
"Furthermore, comparisons against previous SOTA methods show that the responses generated by PPTOD are more factually correct and semantically coherent as judged by human annotators.",
"1 1 Introduction Task-oriented dialogue is often decomposed into three sub-tasks: (1) dialogue state tracking (DST) for tracking user's belief state; (2) dialogue policy learning (POL) for deciding which system action to take; (3) natural language generation (NLG) for generating dialogue response (Young et al., 2013).",
"Traditional approaches (Smith and Hipp, 1995; Young et al., 2013) adopt a modularized pipeline that addresses different sub-tasks with distinct dedicated modules.",
"In contrast, recent systems (Wen et al., 2017; Eric et al., 2017; Lei et al., 2018; Shu et al., 2019) integrate all functionalities required to hold a dialogue into neural network models.",
"With the advances in pre-trained language models (PLMs) (Radford et al., 2019; Devlin et al., 2019; Raffel et al., 2020), different systems based on PLMs have been proposed (Hosseini-Asl et al., 2020; Lin et al., 2020; Peng et al., 2021; Liu et al., 2021).",
"Despite their differences, most existing methods formulate task-oriented dialogue as a cascaded generation problem, that is, the model can only solve latter sub-tasks by conditioning on the outputs of previous ones.",
"For instance, to generate the response (NLG), the model must rely on the outputs of previous sub-tasks (i.e., DST and POL).",
"While impressive results are reported (Hosseini-Asl et al., 2020; Peng et al., 2021), we identify three major limitations in the cascaded formulation of their system design.",
"(1) Firstly, as the model solves all sub-tasks in a sequential order, the errors accumulated from previous steps are propagated to latter steps (Li et al., 2017; Liu and Lane, 2018).",
"(2) Secondly, the training data must be annotated for all sub-tasks.",
"Such annotation requirement significantly increases the data curation overhead.",
"More importantly, it precludes the model from using the large amount of existing data that is partially annotated (e.g., data only annotated with DST or NLG).",
"(3) Thirdly, the results of different sub-tasks must be generated in a cascaded order which inevitably increases the system inference latency.",
"In this study, we propose a novel P lug-andP lay T askO riented D ialogue (PPTOD) system.",
"Figure 1 depicts an illustration of our approach.",
"As seen, we integrate different dialogue modules (e.g. DST, POL, and NLG) into a unified model.",
"Motivated by the concept of in-context learning (Brown et al., 2020), to steer the model to solve different TOD sub-task, we plug a task-specific natural language instruction, termed as prompt , into the dialogue context as the model input.",
"This way, the generations of different sub-tasks are decoupled, leading to a greater flexibility of the model that brings us at least two advantages: (1) As different sub-tasks are 4661 Figure 1: Overview : In the dialogue multi-task pre-training stage, we pre-train our model with four TOD-related tasks, including natural language understanding (NLU), dialogue state tracking (DST), dialogue policy learning (POL), and natural language generation (NLG).",
"(2) The outputs of different sub-tasks are generated in parallel which alleviates the problem of error accumulation and reduces the system inference latency.",
"Inspired by recent success of dialogue language model pre-training (Zhang et al., 2020c; Wu et al., 2020; Peng et al., 2021), we propose a dialogue multi-task pre-training strategy that equips our model with the primary TOD task completion skills.",
"Specifically, initialized with T5 (Raffel et al., 2020), we pre-train our model on a heterogeneous set of dialog corpora that consist of partially-annotated data.",
"To build the pre-training corpora, we collect and combine eleven human-written multi-turn dialogue corpora.",
"The collected datasets are partially annotated for some of the TOD-related tasks, including natural language understanding (NLU), dialogue state tracking (DST), dialogue policy learning (POL), and natural language generation (NLG).",
"In total, the pre-training corpora contain over 2.3M utterances across over 80 domains (see more details in Table 1).",
"When applying the pre-trained PPTOD to a new task, we fine-tune it using the same learning objective as in the pre-training stage.",
"We evaluate PPTOD on a wide range of benchmark TOD tasks, including end-to-end dialogue modelling, dialogue state tracking, and intent classification.",
"Comparisons against previous state-of-the-art approaches show that PPTOD achieves better performance in both full-training and low-resource settings as judged by automatic and human evaluations.",
"In summary, our contributions are: A novel model, PPTOD, that effectively leverages pre-trained language models for task-oriented dialogue tasks.",
"A new dialogue multi-task pre-training strategy that augments the model's ability with heterogeneous dialogue corpora.",
"Extensive evaluations on three benchmark TOD tasks reporting state-of-the-art results in both full-training and low-resource settings.",
"In-depth analysis that further reveals the merits of our model design and the proposed multi-task pre-training strategy.",
"Task-Oriented Dialogue.",
"Task-oriented dialogue aims at accomplishing user's goal.",
"Traditional systems (Williams and Young, 2007; Young et al., 2013) adopt a pipelined approach that requires dialogue state tracking for understanding user's goal, dialogue policy learning for deciding which system action to take, and natural language generation for generating dialogue responses.",
"Recently, to simplify the modelling effort, researchers have shifted their attention to building neural network models that address the TOD subtasks (Wen et al., 2017; Eric et al., 2017; Lei et al., 2018; Liang et al., 2020).",
"With the advances in pre-trained language models (PLMs), Budzianowski and Vulic (2019) first applied the GPT-2 model for the NLG task.",
"Lin et al. (2020) and Yang et al. (2021) moved one step forward and utilized pre-trained language models to solve all TOD sub-tasks 4662 conditioned on the history of oracle belief states.",
"Based on the GPT-2 model, Hosseini-Asl et al. (2020) proposed a cascaded model, SimpleTOD, that addresses all TOD sub-tasks without using the oracle information.",
"To improve the system performance, Peng et al. (2021) and Liu et al. (2021) applied dialogue pre-training over external dialogue corpora.",
"However, both methods require the pretraining data to be fully annotated for all TOD sub-tasks (i.e., DST, POL, and NLG) which greatly limits the amount of data they can use.",
"Additionally, Liu et al. (2021) achieved better results with noisy chanel model that requires two additional language models for outputs re-scoring.",
"Unlike their approach, we address the task of task-oriented dialogue with a single unified model.",
"Lastly, concurrent work by He et al. (2021) shows that adding an unified dialogue act prediction task for policy optimization helps to improve the performance of the pre-trained task-oriented dialogue model.",
"Language Model Pre-training.",
"The research community has witnessed remarkable progress of pre-training methods in a wide range of NLP tasks, including language understanding (Peters et al., 2018; Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Su et al., 2021a) and text generation (Radford et al., 2019; Lewis et al., 2020; Raffel et al., 2020; Su et al., 2021d,c,b, 2022).",
"In the dialogue domain, many models are pre-trained on open-domain conversational data like Reddit.",
"Based on GPT-2, Transfertransfo (Wolf et al., 2019b) achieves good results on ConvAI-2 competition.",
"As another extension of GPT-2, DialoGPT (Zhang et al., 2020c) performs well in generating open-domain dialogue response.",
"ConveRT (Henderson et al., 2020) is a language model with dual-encoder built for the task of response selection.",
"PLATO (Bao et al., 2020) pre-trains a model with discrete latent variable structure for the response generation task.",
"Wu et al. (2020) adapts BERT with TOD pre-training and achieves strong performances on four dialogue understanding tasks.",
"Pre-training on Supplementary Data.",
"Recent work (Phang et al., 2018; Aghajanyan et al., 2021) found that supplementary training on the tasks with intermediate-labelled data improves the performance of the fine-tuned models on GLUE natural language understanding benchmark (Wang et al., 2018).",
"Our work studies a similar supplementary training setup with intermediate-labelled data for Dataset Data Annotation Utter.",
"task-oriented dialogue systems.",
"Unlike previous work, we use a single multi-task model for all relevant sub-tasks in task-oriented dialogue systems.",
"In this section, we first discuss the datasets and learning objective used in the proposed dialogue multi-task pre-training.",
"Then we introduce how to apply the pre-trained PPTOD for a new task.",
"To construct the pre-training corpus, we collect eleven human-written multi-turn task-oriented dialogue corpora, including MetaLWOZ (Lee et al., 2019b), SNIPS (Coucke et al., 2018), CLINC (Lar-son et al., 2019), ATIS (Amin, 2019), KVRET (Eric et al., 2017), WOZ (Mrkic et al., 2017), MSR-E2E (Li et al., 2018), Frames (El Asri et al., 2017), TaskMaster (Byrne et al., 2019), and Schema-Guided (Rastogi et al., 2020).",
"In total, there are over 2.3M utterances across 80 domains.",
"In Table 1, we provide the details of data annotations and utterance/domain statistics of all datasets.",
"2 3.2 Dialogue Multi-Task Pre-training Motivated by previous work (McCann et al., 2018; Keskar et al., 2019; Raffel et al., 2020) that unify multiple NLP tasks into a common format, we cast all TOD-related tasks that we consider into the same plug-and-play text generation problem.",
"To specify the target task, we plug a task-specific 2 More dataset descriptions are provided in Appendix A. 4663 Algorithm 1: Dialogue Multi-Task Pre-Training Input : Dataset D = { ( z t , x, y ) i } |D| i =1 ; model trainer T that takes batches of training data as input to optimize the model parameters ; maximum number of epochs e max ; 1 for epoch e = 1 , ..., e max do 2 Shuffle D by mixing data from different tasks; for B in D do 3 Invoke trainer T , using one batch of training data B = { ( z t , x, y ) k } | B | k =1 as input to optimize the model using L (Eq.",
"where t denotes the TOD task that the sample d belongs to, and t { NLU , DST , POL , NLG } .",
"z t is the task-specific prompt of the form translate dialogue to A: , with A corresponding to user intent, belief state, dialogue act, and system response for the tasks of NLU, DST, POL, and NLG, respectively.",
"x denotes the input dialogue context which is a concatenation of all previous utterances in the dialogue both system's and user's.",
"And y denotes the target output text.",
"As an example presented in Figure 1, to perform the user intent classification task (i.e., NLU), the model is fed with the sequence translate dialogue to user intent: [user] Tell me the weather forecast for Lecanto, Georgia. and is trained to generate the user intent label text [get_weather] .",
"Learning.",
"The model is trained with a maximum likelihood objective.",
"Given the training sample d = ( z t , x, y ) , the objective L is defined as L = | y | (cid:88) i =1 log P ( y i | y <i ; z t , x ) , (2) where is the model parameters.",
"In the multi-task pre-training stage, the model is trained to perform all TOD-related tasks with data annotated for different tasks.",
"To optimize the model parameters , we use mini-batch based optimization approach as shown in Algorithm 1.",
"When applying the pre-trained PPTOD to a new downstream task with task-specific labelled data,",
"In this work, we report results of PPTOD with three model sizes: PPTOD small , PPTOD base , and PPTOD large .",
"These three models are initialized with T5-small, T5-base, and T5-large models (Raf-fel et al., 2020) that contain 60M, 220M, and 770M parameters, respectively.",
"We pre-train the model with different configurations on our collected pre-training corpora for 10 epochs.",
"The training samples are truncated to ensure a maximal length of 1024.",
"The models are trained using Adam optimizer (Kingma and Ba, 2015) with a learning rate of 5e-5 and a batch size of 128.",
"Our implementation is based on the Huggingface Library (Wolf et al., 2019a).",
"We test PPTOD on three benchmark TOD tasks: (1) end-to-end dialogue modelling; (2) dialogue state tracking; and (3) user intent classification.",
"End-to-end dialogue modelling aims at evaluating the model in the most realistic, fully end-to-end setting, where the generated dialogue states are used for the database search and response generation (Zhang et al., 2020b; Hosseini-Asl et al., 2020).",
"We conduct experiments on the benchmark MultiWOZ 2.0 (Budzianowski et al., 2018) and 2.1 (Eric et al., 2020) datasets.",
"3 In MultiWOZ, the generation of response is not only related to the dialogue context, but also grounded on the database (DB) state.",
"The DB state is automatically retrieved from a pre-defined database using the generated dialogue state (DST).",
"Following previous studies, during inference, PPTOD first predicts the DST result to retrieve the DB state.",
"Then, based on the retrieved DB state and the dialogue context, the results of POL and NLG are generated in parallel.",
"In Section 5, we further compare the performance of our model with or without using the DB state as input.",
"For evaluation, we follow the original MultiWOZ guidance for all individual metrics: Inform , Success , and BLEU (Papineni et al., 2002).",
"An 3 Note that, there is no overlap between the MultiWOZ dataset and our dialogue pre-training corpora.",
"overall measurement, i.e., combined score (Mehri et al., 2019), is also reported which is defined as Combined = (Inform + Success) 0.5 + BLEU.",
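The combined score is a one-line computation; for instance:

```python
def combined_score(inform: float, success: float, bleu: float) -> float:
    return (inform + success) * 0.5 + bleu

print(combined_score(85.10, 75.10, 17.82))  # 97.92, as in Table 6
```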
"We compare PPTOD with several strong baselines, including Sequicity (Lei et al., 2018), MD-Sequicity (Zhang et al., 2020b), DAMD (Zhang et al., 2020b), MinTL (Lin et al., 2020), HIER-Joint (Santra et al., 2021), LABES-S2S (Zhang et al., 2020a), SimpleTOD (Hosseini-Asl et al., 2020), UBAR (Yang et al., 2021), and SOLOIST (Peng et al., 2021), TOP and TOP+Noisy Online Decoding (TOP+NOD) (Liu et al., 2021).",
"Table 2 shows the main results.",
"On both MultiWOZ 2.0 and 2.1 datasets, PPTOD performs better than previous SOTA methods on seven out of eight metrics.",
"In particular, it is worth mentioning that our model is a single architecture that does not require additional language models for re-ranking the outputs as in TOP+NOD (Liu et al., 2021).",
"Moreover, the results show that the large size PPTOD large underperforms PPTOD small and PPTOD base .",
"Our analysis is that the large size model is less capable when learning to generate the delexicalized tokens, which are not seen during its pre-training stage, for the NLG task.",
"To investigate the generalization ability of PPTOD, we evaluate it in a more challenging low-resource scenario.",
"Following previous studies, we train our model on MultiWOZ 2.0 by varying the percentage of training data, ranging from 1% ( 80 samples) to 20% ( 1600 samples).",
"We compare our model with several strong baselines, including MD-Sequicity, DAMD, SOLOIST, and MinTL.",
"4 In each low-resource setting, we train our model five times with different random seeds and different selection of training data.",
"The average scores over five runs are presented in Table 3.",
"5 As seen, PPTOD consistently outperforms all baseline models by a large margin.",
"Notably, our performance gain is even larger when fewer samples are used for training.",
"This indicates that PPTOD better leverages 4 We did not compare results with TOP+NOD (Liu et al., 2021) since the authors did not release their code and models.",
"the prior knowledge from pre-training therefore achieving better results in the extreme low-resource settings.",
"Furthermore, with 20% of training data, PPTOD can achieve results that are comparable to the scores of systems like SOLOIST that are trained with full dataset as reported in Table 2.",
"Next, we evaluate PPTOD for the dialogue state tracking task.",
"The experiments are conducted on the benchmark MultiWOZ 2.0 (Budzianowski et al., 2018) and 2.1 (Eric et al., 2020) datasets.",
"For evaluation, the joint goal accuracy is reported.",
"We compare PPTOD with a wide range of existing methods that can be categorized into two classes: (1) classification-based approaches and (2) generation-based approaches.",
"Table 4 shows the DST results.",
"Compared to other generation-based approaches, PPTOD large obtains the highest accuracy on both datasets.",
"The performance of our model is lower than the SOTA classification-based approaches.",
"However, these methods operate on a fixed ontology and perform prediction over a pre-defined set of slot-value pairs (Zhang et al., Model Training Size (%) 1 5 10 20 SimpleTOD 7.91 1.07 16.14 1.48 22.37 1.17 31.22 2.32 MinTL 9.25 2.33 21.28 1.94 30.32 2.14 35.96 1.25 SOLOIST 13.21 1.97 26.53 1.62 32.42 1.13 38.68 0.98 PPTOD small 27.85 0.77 39.07 0.85 42.36 0.29 45.98 0.38 PPTOD base 29.72 0.61 40.20 0.39 43.45 0.64 46.96 0.40 PPTOD large 31.46 0.41 43.61 0.42 45.96 0.66 48.95 0.13 Table 5: Low-resource DST Evaluation: The means and standard deviations over five runs are reported.",
"2019; Chen et al., 2020; Shan et al., 2020; Zhou et al., 2021).",
"This idea of fixed ontology is not scalable, as in real world applications, the ontology is subject to constant change (Heck et al., 2020).",
"In contrast, PPTOD directly generates the outputs, making it more adaptive and generalizable to new ontology labels in real world applications.",
"To investigate how well PPTOD performs with limited training samples on the downstream task, we evaluate it in a simulated low-resource setting.",
"Specifically, we train the model on MultiWOZ 2.0 by varying the percentage of training data (i.e., 1%, 5%, 10%, and 20%).",
"We compare PPTOD with three strong generation-based baselines, including SimpleTOD, MinTL, and SOLOIST, using the offi-cial code released by the authors.",
"Table 5 shows the experimental results.",
"As seen, in all settings, PPTOD outperforms other baselines by a large margin.",
"In the extreme scenario, with only 1% of training data, PPTOD surpasses the strongest SOLOIST model by 18 points of accuracy.",
"This demonstrates that our model is more generalizable and can be better applied to new tasks where the amount of training data is limited.",
"The goal of intent classification, i.e. NLU, is to classify the user's intent based on the user's utterance.",
"We conduct experiments on the benchmark Banking77 dataset (Casanueva et al., 2020) that contains data with 77 different intents.",
"Following previous studies (Casanueva et al., 2020; Peng et al., 2021), we test our model in both full training and low-resource settings.",
"In the low-resource setting, we vary the number of training samples per intent from 10 to 30.",
"The standard classification accuracy is reported for evaluation.",
"We compare PPTOD with several strong baselines, including BERT-Fixed, BERT-Tuned, USE+ConveRT (Casanueva et al., 2020), USE 4666 Model Generation Mode DB End-to-End Dialogue Modelling Inference Measurement Inform Success BLEU Combined Score Latency (ms) Speedup SOLOIST Cascaded (cid:88) 85.50 72.90 16.54 95.74 208.69 1.00 MinTL Cascaded (cid:88) 84.88 74.91 17.89 97.78 78.82 2.65 T5-small Cascaded 83.60 71.20 18.09 95.49 38.70 5.39 (cid:88) 84.10 73.70 18.03 96.93 39.78 5.25 Plug-and-Play 84.70 72.80 18.52 97.27 14.17 14.73 (cid:88) 85.10 75.10 17.82 97.92 19.52 10.69 Table 6: Comparison between plug-and-play and cascaded generation.",
"(Yang et al., 2020), ConveRT (Henderson et al., 2020), and SOLOIST (Peng et al., 2021).",
"It is worth mentioning that all compared baselines are classification-based approach that uses a classifier with a softmax layer to make the prediction over the pre-defined intent set.",
"In contrast, as described in section 3.2, PPTOD solves the classification task as a generation problem by directly generating the text of intent label.",
"Therefore, when adapting to a new classification task, PPTOD is more flexible and no extra model parameters are required.",
"In the experiments, we train PPTOD for five runs with different selection of training data and random seeds.",
"The average scores and standard deviations are reported in Table 7.",
"We see that PPTOD is comparable with existing methods.",
"On low-resource-30 and full training settings, PPTOD large achieves the best results.",
"Our performance gains are even more remarkable given that PPTOD requires no extra parameters when solving the classification task.",
"In this section, we present further discussions and empirical analyses of the proposed model.",
"First, we compare our plug-and-play generation with the cascaded generation that is adopted by",
"most existing studies.",
"To this end, we fine-tune a T5-small model (without dialogue multi-task pretraining) on MultiWOZ 2.0 by either using the plug-and-play or the cascaded formulation.",
"Moreover, we also examine the effect of DB state on the model performance.",
"Specifically, for the plug-and-play model, when utilizing DB state, it first predicts the dialogue state (DST) to retrieve the DB state from the pre-defined database.",
"Then, based on the DB state and dialogue context, the output of POL and NLG are generated in parallel.",
"When ignoring the DB state, the plug-and-play model generates DST, POL, and NLG results in a fully paralleled fashion.",
"For evaluation, we report the results on end-to-end dialogue modelling task.",
"In addition, we report the average inference latency and relative speedup of each model.",
"6 We compare our ablated models with two strong baselines, SOLOIST and MinTL.",
"7 Table 6 presents the results.",
"As seen, the plug-and-play models yield better results than their cascaded counterparts.",
"One reason is that, for cascaded models, the previously generated results are explicitly used as model input for latter sub-tasks, which leads to error accumulation.",
"Moreover, we see that using DB state generally improves the model performance for both plug-and-play and cascaded models as it provides the model with more grounding information.",
"Furthermore, with DB state, our plug-and-play model achieves better overall score than MinTL with an around 4 speedup.",
"This suggests that the plug-and-play formulation benefits the model both in terms of the generation accuracy as well as the inference latency.",
"Next, we provide further analyses on the dialogue multi-task pre-training strategy.",
"To quantify the importance of different pre-training data, we pre-train 6 The latency of each model is measured on a single Nvidia V100 GPU with a batch size of 1.",
"the T5-small model using data that is annotated for individual TOD-related task (i.e., NLU, DST, POL, and NLG).",
"After pre-training, we then evaluate the models on three downstream TOD tasks using MultiWOZ 2.0 and Banking77 datasets.",
"For end-to-end dialogue modelling and dialogue state tracking, we test the model in both 1% and full training settings.",
"For intent classification, we measure the accuracy of models trained with either 10 training samples per intent or full training samples.",
"Table 8 presents the results with the first row showing the performance of vanilla T5-small model.",
"As seen, without any pre-training, the vanilla T5-small model performs poorly in the low-resource setting of all evaluated tasks.",
"This suggests that the prior knowledge from pre-training is indispensable for the model to achieve strong performances in the low-resource scenarios.",
"Moreover, we see that pre-training with data annotated for individual TOD-related task helps the model to attain better result in the corresponding downstream task.",
"For example, pre-training with DST data notably improves the model performance in the downstream DST task both in low-resource and full-training settings.",
"Similarly, pre-training with NLG data helps the model to get better BLEU score in the end-to-end dialogue modelling task.",
"Lastly, we see that the PPTOD small model attains the best results on most of the evaluation metrics.",
"This suggests that the pre-training data with different annotations are compatible with each other and the joint utilization of all pre-training data helps the model to achieve the best overall performance.",
"We also conduct a human evaluation with the help of graders proficient in English using an internal evaluation platform.",
"For evaluation, we randomly selected 50 dialogue sessions from the test set of MultiWOZ 2.0 dataset.",
"We compare the results Understanding Truthfulness Coherency Fluency Agreement 0.641 0.598 0.668 0.806 Reference 1.92 2.00 1.93 1.98 SOLOIST 1.78 1.29 1.64 1.97 PPTOD 1.86 1.51 1.83 1.99 Table 9: Human Evaluation Results generated by the PPTOD base model against the results from the SOLOIST model.",
"All generated results, plus the reference, are evaluated by five graders on a 3-point Likert scale (0, 1, or",
"2) for each of the following features 8 : Understanding : Whether the system correctly understands the user's goal.",
"Truthfulness : Whether the system's response is factually supported by the reference.",
"9 Coherency : Whether the system's response is semantically coherent with the context.",
"Fluency : Whether the system's response is grammatically fluent and easy to understand.",
"Table 9 lists the results, with the first row showing strong inter-annotator agreements as measured by Fleiss (cid:48) kappa coefficient (Fleiss et al., 1971).",
"Comparing with SOLOIST, our model achieves better scores on all metrics.",
"Moreover, on the truthfulness and coherency metrics, our model significantly outperforms SOLOIST as judged by Sign Test (p-value < 0.05), suggesting that PPTOD generates more factually correct and semantically coherent responses.",
"Finally, we note that on the fluency metric, both systems perform comparably with the reference (p-value > 0.4).",
"This shows that the fluency of such systems is largely guaranteed by the prior syntactic knowledge from pre-trained language models, which suggests that future research should focus more on the other aspects of dialog systems.",
"In this paper, we propose PPTOD, a unified model that supports both task-oriented dialogue understanding and response generation in a plug-and-play manner.",
"In addition, we introduce a new dialogue multi-task pre-training strategy to further augment our model's ability in completing TOD-related tasks.",
"Extensive experiments and analysis are conducted on three benchmark TOD tasks in both high-resource and low-resource settings.",
"The automatic and human evaluations demonstrate that PPTOD outperforms the current SOTA systems in terms of various evaluation metrics.",
"The authors would like to thank Anna Currey, David Vandyke, and Dingmin Wang for their insightful discussions and support.",
"Many thanks to our anonymous reviewers and area chairs for their suggestions and comments.",
"We honor and support the ACL code of Ethics.",
"Task-oriented dialogue systems aim to interact and assist the users to fulfill their goals.",
"The interaction and assistance process do not involve any bias towards to the participants.",
"All datasets used in this work are from previously published works, and in our view, do not have any attached privacy or ethical issues."
] | [
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method"
] |
[
"Geometry problem solving has attracted much attention in the NLP community recently.",
"The task is challenging as it requires abstract problem understanding and symbolic reasoning with axiomatic knowledge.",
"However, current datasets are either small in scale or not publicly available.",
"Thus, we construct a new large-scale benchmark, Geometry3K, consisting of 3,002 geometry problems with dense annotation in formal language.",
"We further propose a novel geometry solving approach with formal language and symbolic reasoning, called Interpretable Geometry Problem Solver (Inter-GPS).",
"Inter-GPS first parses the problem text and diagram into formal language automatically via rule-based text parsing and neural object detecting, respectively.",
"Unlike implicit learning in existing methods, Inter-GPS incorporates theorem knowledge as conditional rules and performs symbolic reasoning step by step.",
"Also, a theorem predictor is designed to infer the theorem application sequence fed to the symbolic solver for the more efficient and reasonable searching path.",
"Extensive experiments on the Geometry3K and GEOS datasets demonstrate that Inter-GPS achieves signifi-cant improvements over existing methods.",
"1 1 Introduction Geometry problem solving is a long-standing challenging task in artificial intelligence and has been gaining more attention in the NLP community recently (Seo et al., 2014; Hopkins et al., 2019; Sachan et al., 2020).",
"Solving geometry problems is an essential subject in high-school education for the development of students' abstract thinking.",
"As an example shown in Figure 1, given problem text Equal contribution.",
"in natural language and a corresponding diagram, one needs to identify the geometric relations, apply theorem knowledge, and conduct algebraic calculations to derive the numerical value of the answer.",
"Psychologists and educators believe that solving geometric problems requires high-level thinking abilities of symbolic abstraction and logical reasoning (Chinnappan, 1998; Nur and Nurvitasari, 2017).",
"However, if algorithms take the raw problem content, it might encounter challenges to understand the abstract semantics and perform human-like cognitive reasoning for inferring the answer in the geometry domain.",
"A formal language is composed of words from a well-formed alphabet based on a spe-cific set of rules and is commonly used in the fields of linguistics and mathematics.",
"Therefore, our proposed geometry solver parses the problem inputs into formal language descriptions (see examples in Figure 1) before solving the problems.",
"To translate the problem text and diagrams to formal descriptions, existing methods (Seo et al., 2015; Sachan et al., 2017; Sachan and Xing, 2017) highly depend on human annotations like symbols in diagrams as the intermediate results.",
"Also, these methods fail to provide the explicit reasoning processes when predicting the answer.",
"For example, (Seo et al., 2015) simplifies the problem solving task to an optimization problem to pick one that satisfies all constraints from choice candidates.",
"Furthermore, most current datasets are either small in scale or not publicly available (Seo et al., 2015; Sachan and Xing, 2017), which further hinders the research of geometry problem solving.",
"To overcome these challenges, we first construct a new large-scale benchmark, called Geometry3K, to assess algorithms' performance of geometry problem solving.",
"The Geometry3K dataset consists of 3,002 multi-choice problems as well as covers diverse geometric shapes and problem goals.",
"In contrast with existing work, we also annotate each problem text and diagram with unified structural descriptions in formal language.",
"This paper further presents a novel geometry solving approach with formal language and symbolic reasoning, called Interpretable Geometry Problem Solver (Inter-GPS).",
"Inter-GPS (Figure 4) develops an automatic parser that translates the problem text via template rules and parses diagrams by a neural object detector into formal language, respectively.",
"In contrast to parameter learning, Inter-GPS formulates the geometry solving task as problem goal searching, and incorporates theorem knowledge as conditional rules to perform symbolic reasoning step by step.",
"It demonstrates an interpretable way to tackle the task.",
"Also, we design a theorem predictor to infer the possible theorem application sequence in Inter-GPS for the efficient and reasonable searching path.",
"Extensive experiments on the Geometry3K and GEOS datasets show Inter-GPS achieves large improvements over existing methods.",
"Our contributions are three-fold: (1) we introduce a large-scale diverse benchmark of geometry problem solving, Geometry3K, which is densely annotated with formal language; (2) we develop an automatic problem parser to translate the problem text and diagram into formal language; (3) we propose a novel interpretable problem solver that applies symbolic reasoning to infer the answer.",
"Datasets for Geometry Problem Solving.",
"Several datasets for geometry problems have been released in recent years.",
"These include GEOS (Seo et al., 2015), GEOS++ (Sachan et al., 2017), GeoShader (Alvin et al., 2017) and GEOS-OS (Sachan and Xing, 2017) datasets.",
"However, these datasets are relatively small in scale and contain limited problem types.",
"For example, there are only 102 shaded area problems in GeoShader and 186 problems in GEOS.",
"While GEOS++ and GEOS-OS contain more data of 1,406 and 2,235 problems, respectively, they have not been publicly available yet.",
"Instead, our Geometry3K dataset features 3,002 SAT-style problems collected from two high-school textbooks that cover diverse graph and goal types.",
"Besides, each problem in Geometry3K is annotated with dense descriptions in formal language (defined in Section 3), which makes it particularly suited for symbolic reasoning and interpretable problem solving.",
"In order to promote follow-up work in the geometry domain, we release the dataset and evaluation baselines.",
"Approaches for Geometry Problem Solving.",
"Due to the sparsity of appropriate data, most early works on automated geometry systems focus on geometry theorem proving (Wen-Tsun, 1986; Chou et al., 1996; Yu et al., 2019; Gan et al., 2019), problem synthesis (Alvin et al., 2014), diagram parsing (Seo et al., 2014), as well as problem formalization (Gan and Yu, 2018).",
"(Seo et al., 2015) attempt using computer vision and natural language processing techniques to solve geometry problems with problem understanding.",
"However, the system does not perform explicit reasoning with axiomatic knowledge as it reduces the task to an optimization problem to see which choice can satisfy all constraints.",
"Some recent efforts (Sachan et al., 2017, 2020) have been made to incorporate theorem knowledge into problem solving.",
"They feed geometry axioms written as horn clause rules and declarations from the diagram and text parser into logical programs in prolog style to solve the problem.",
"However, these methods fail to provide human-readable solving steps.",
"And parameter learning on horn clause rules and built-in solvers leads to an uncontrollable search process.",
"In contrast, our proposed Inter-GPS implements explicit symbolic reasoning to infer the answer without the help of candidate answers in an interpretable way.",
"Interpretable Math Problem Solving.",
"Due to the intrinsic requirements of symbolic understanding and logical reasoning, interpretability of solvers plays an essential role in geometry problem solving.",
"While the interpretability of geometry problem solvers is rarely explored, some pioneering work has been proposed in the general math problem solving domain.",
"Broadly there are two main lines of achieving interpretable solving steps for math problems.",
"The first generates intermediate structural results of equation templates (Huang et al., Problem Text Diagram Choices Text Literals Diagram Literals Find y.",
"2017; Wang et al., 2019), operational programs (Amini et al., 2019) and expression trees (Wang et al., 2018; Qin et al., 2020; Hong et al., 2021).",
"The second line of work with a higher level of interpretability translates the math problems into symbolic language and conducts logical reasoning iteratively to predict the final results (Matsuzaki et al., 2017; Roy and Roth, 2018).",
"Furthermore, inspired by work on semantic parsing (Han and Zhu, 2005; Zhu and Mumford, 2006; Tu et al., 2014), we claim structured diagram parsing and joint semantic representations for text and diagrams is critical in interpretable geometry problem solving.",
"A geometry problem P is defined as a tuple ( t, d, c ) , in which t is the input text, d is the diagram image and c = { c 1 , c 2 , c 3 , c 4 } is the multiple-choice candidate set in the format of numerical values.",
"Given the text t and diagram d , an algorithm is required to predict the correct answer c i c .",
"We formally describe the problem in the geometric domain language , a set of literals composed of predicates and arguments.",
"Basic terms used in the geometry problem solver are defined as follows.",
"entity, geometric relation, or arithmetic function.",
"Definition",
"2. A literal is an application of one predicate to a set of arguments like variables or constants.",
"A set of literals makes up the semantic description from the problem text and diagrams in the formal language space .",
"Definition",
"3. A primitive is a basic geometric element like a point, a line segment, a circle, or an arc segment extracted from the diagram.",
"Table 1 lists examples of predicates and literal templates.",
"There are 91 predicates in our defined formal language, and we list them in the Tables 10 to 15 in the Appendix Section.",
"4.1 Dataset Collection Most existing datasets for geometry problem solving are relatively small, contain limited problem types, or not publicly available.",
"For instance, the GEOS dataset (Seo et al., 2015) only contains 186 SAT problems.",
"Although there are 1,406 problems in GEOS++ (Sachan et al., 2017), this dataset has not been released to the public yet.",
"Therefore, we build a new large-scale geometry problem benchmark, called Geometry3K.",
"The data is collected from two popular textbooks for high school students across grades 6-12 by two online digital libraries (McGraw-Hill 2 , Geometryonline 3 ).",
"Groups of well-trained annotators with undergraduate degrees manually collect each problem with its problem text, geometry diagram, four candidate choices, and correct answer.",
"In order to evaluate the fine-grained performance of geometry solvers, we label each problem data with the corresponding problem goal and geometry shapes.",
"2 https://www.mheducation.com/ 3 www.geometryonline.com Dataset #qa #word #shape #goal #var grade operator type GeoShader (Alvin et al., 2017) 102 / 4 1 1 6-10 { + , , , , (cid:50) 2 , (cid:50) } GEOS (Seo et al., 2015) 186 4,343 4 3 1 6-10 { + , , , , (cid:50) 2 , (cid:50) } GEOS++ (Sachan et al., 2017) 1,406 / 4 3 1 6-10 { + , , , , (cid:50) 2 , (cid:50) } GEOS-OS (Sachan and Xing, 2017) 2,235 / 4 3 1 6-10 { + , , , , (cid:50) 2 , (cid:50) } Geometry3K (ours) 3,002 36,736 6 4 3 6-12 { + , , , , (cid:50) 2 , (cid:50) , sin , cos , tan } Table 2: Comparison of our Geometry3K dataset with existing datasets.",
"Unlike existing datasets that only collect the problem text and diagrams, we further annotate each data in Geometry3K with dense formal language descriptions that bridge the semantic gap between the textual and visual contents as well as benefit the symbolic problem solver.",
"The annotated formal language is used to train and evaluate our proposed problem parsers.",
"Data examples are illustrated in Figure",
"2. 4.2 Dataset Statistics The Geometry3K dataset consists of 3,002 problems and is divided into the train, validation, and test sets with the ratio of 0.7:0.1:0.2, as shown in Table",
"3. Figure 3 illustrates the question distribution by the number of sentence words.",
"The long tail in the distribution requires the geometry solvers to understand the rich semantics in the textual content.",
"There are 6,293 literals for the problem text and 27,213 literals for the diagrams in Geometry3K, respectively.",
"We list the most and least frequent predicates with a frequency greater than 5 in Table",
"4. It is shown that the predicates for the problem Predicates (Text) % Predicates (Diagram) % Find 19.00 Line 30.89 Line 14.49 PointLiesOnLine 16.66 Equals 11.83 Equals 15.17 LengthOf 9.53 MeasureOf 10.46 MeasureOf 8.97 LengthOf 8.69 ......",
"text are more evenly distributed than those for diagrams.",
"This is mainly because the problem text describes diverse geometric shapes, attributes, and relations while diagrams display the basic properties of points, lines, and arcs.",
"To the best of our knowledge, currently, it is the largest geometry problem dataset.",
"We summarize the Geometry3K dataset's main statistics and a comparison of existing datasets in Table",
"2. In addition to four elementary shapes (lines, triangles, regular quadrilaterals, and circles) mentioned in that GEOS dataset, Geometry3K contains irregular quadrilaterals and other polygons.",
"Besides, in Geometry3K, there are more unknown variables and operator types that may require equation solving to find the goal of the problem.",
"Note that 80.5% of problems are solvable without the associated diagram in the GEOS dataset.",
"By contrast, less than 1% of the problems in our Geometry3K dataset could be solved when the problem diagram is not provided.",
"In general, the statistics and comparisons above show Geometry3K is challenging for geometry problem solvers.",
"As an intellectual task, it is necessary to know the human performance for geometry problems.",
"We push the test-split data of the dataset in the crowd-sourcing platform, Amazon Mechanical Turk 4 .",
"Each eligible annotator must have obtained a high school or higher degree and is asked to answer 10 problems in 25 minutes.",
"To ensure annotators solving the problem to the best of their ability, they are further asked to spend at least 7 minutes on the problem set and 10 seconds on each problem.",
"We filter out annotators who do not satisfy the requirement.",
"We also ask dozens of graduates majoring in science or engineering to answer these problems to evaluate human experts' performance.",
"Table 5 shows the human performance.",
"Compared to random guess's accuracy of 25%, humans achieve an overall accuracy of 56.9%, and human experts can achieve a good performance of 90.9%.",
"Our proposed Inter-GPS takes the problem text and diagrams as inputs and translates them into formal language descriptions automatically via the text parser (Section 5.1) and the diagram parser (Section 5.2), respectively.",
"Given the word sequence of the problem text T , the text parser needs to translate it into a set of literals L t , a sequence composed of predicates and variables.",
"Recently, deep neural networks have achieved promising performances in sequence-to-sequence (Seq2Seq) learning tasks like machine translation (Sutskever et al., 2014; Vaswani et al., 2017; Devlin et al., 2018).",
"However, semantic parsers using Seq2Seq learning methods are not feasible to generate satisfactory literals in the Geometry3K dataset for two reasons.",
"Firstly, the limited scale of geometry datasets weakens these highly data-driven methods.",
"Secondly, neural semantic parsers tend to bring noises in generated results while geometry solvers with symbolic reasoning are sensitive to such deviations.",
"Inspired by previous works (Koo et al., 2008; Seo et al., 2015; Bansal et al., 2014) that indicate the rule-based parsing method is able to obtain precise parsing results, we apply this approach with regular expressions to perform text parsing.",
"We also achieve a semantic text parser using BART (Lewis et al., 2020), one of the state-of-the-art sequence learning models for comparison.",
"Diagrams provide complementary geometric information that is not mentioned in the problem text.",
"Previous works (Seo et al., 2014, 2015) require manual annotations to identify symbols in the diagrams and fail to deal with special relational symbols such as parallel , perpendicular , and isosceles .",
"Instead, an automatic diagram parser without human intervention is proposed in this work and is able to detect varied diagram symbols.",
"The diagram parser first applies Hough Transformation (Shapiro and Stockman, 2001) to extract geometry primitives (points, lines, arcs, and circles), following (Seo et al., 2015).",
"Then the diagram symbols and text regions are extracted through a strong object detector RetinaNet (Lin et al., 2017), and the textual content is further recognized by the optical character recognition tool MathPix 5 .",
"After obtaining the primitive set P and symbol set S , we need to ground each symbol with its associated primitives.",
"(Seo et al., 2015) adapts a greedy approach where each symbol is assigned to the closest primitive without considering its validity.",
"Instead, we formulate the grounding task as an optimization problem with the constraint of geometry relations: min (cid:88) s dist ( s i , p j ) 1 { s i assigns to p j } s.t. ( s i , p j ) Feasibility set F, (1) where the dist function measures the Euclidean distance between the symbol s i and primitive p j .",
"F defines the geometric constraints for symbol grounding.",
"For example, the parallel symbol could only be assigned to two lines with the same slopes and the perpendicular symbol is only valid to two orthogonal lines.",
"Unlike existing methods (Seo et al., 2015; Sachan et al., 2017; Alvin et al., 2017; Sachan et al., 2020), Inter-GPS achieves the explicit symbolic reasoning with the theorem knowledge base and the human-readable search process, shown in Figure",
"4. 6.1 Symbolic Geometry Solver Overall, Inter-GPS takes the relation set R and the theorem knowledge base set KB as inputs, and outputs the numeric solution g of the problem goal g .",
"The relation set R defines geometry attributes and relations in the given problem, and is initialized with literals from the text and diagram parsers.",
"R is further expanded with literals that are derived from definitions of geometry shapes.",
"For example, a triangle is defined as three connected sides.",
"So if there is a literal Triangle(A,B,C) , six more literals ( Ponit(A) , Ponit(B) , Ponit(C) , Line(A,B) , Line(B,C) , Line(C,A) ) will be appended to R .",
"The theorem set KB is represented as a set of theorems, where each theorem k i is written as a conditional rule with a premise p and a conclusion q .",
"For the search step t , if the premise p of k i matches the current relation set R t 1 , the relation set is updated according to the conclusion q : R t k i R t 1 , k i KB .",
"After the application of several theorems, equations between the known values and the unknown problem goal g are established, and g could be solved after solving these equations:",
"As the geometry problems in Geometry3K are collected from high school textbooks, it might need to apply multiple theorems before the problems are solved.",
"Intuitively, one possible search strategy is to use brute force to enumerate candidates in the theorem set randomly.",
"The random search strategy is inefficient and might lead to problems unsolvable as there might be applications of complicated theorems in the early stage.",
"Therefore, an ideal geometry problem solver can solve the problems using reasonable theorem application sequences.",
"Students with good academic performance can solve a problem with prior knowledge learning from a certain amount of problem solving training.",
"Inspired by this phenomenon, a theorem predictor is proposed to infer the possible theorem application sequence for inference after multiple attempts on the train data.",
"Recent studies (Loos et al., 2017; Balunovic et al., 2018) also suggest that neural guided search can speed up the search process.",
"There are no annotated theorem application sequences for data in Geometry3K due to tremendous worker labor.",
"Thus, we randomly sample from the theorem set multiple times to generate the application sequences.",
"A generated sequence is regarded as positive if the geometry solver Inter-GPS solves the problem after the application of that sequence.",
"A positive sequence with the minimum length for a problem is seen as pseudo-optimal.",
"Finally, after attempts, we collect 1,501 training samples with the problem and its pseudo-optimal theorem application sequence.",
"Given the problem formal description L = { l 1 , ..., l m } , the theorem predictor aims to reconstruct the pseudo-optimal theorem sequence T = { t 1 , ..., t n } token by token.",
"We formulate the generation task as a sequence-to-sequence (Seq2Seq) problem and use a transformer-based model (Lewis et al., 2020) to generate theorem sequence tokens.",
"Specifically, the transformer decoder predicts the next theorem order t i given T = { t 1 , ..., t i } .",
"The Seq2Seq model is trained to optimize the negative log-likelihood loss: LTP = n (cid:88) i =1 log p TP ( t i | t 1 , . . . , t i 1 ) , (4) Algorithm 1 Symbolic Geometry Solver Input Literals L , goal g , knowledge bases KB 1 , KB 2 Output Numeric goal value g and theorem application S 1: function SEARCH ( L , g , KB 1 , KB 2 ) 2: Initialize relation set R 0 with L , g = , S = 3: KB p THEOPREDICTOR ( L ) (cid:46) Predicted 4: for k i KB p do 5: R t k i R t 1 6: S .",
"where p TP is the parametrized conditional distribution in the theorem predictor model.",
"After the application of the theorem sequence predicted by the theorem predictor, it is likely that Inter-GPS still could not find the problem goal.",
"Generally, humans incline to use simple theorems first when solving math problems to reduce complex calculations.",
"If simple theorems are not tangible, they will turn to more complex theorems.",
"On account of that, we apply an efficient search strategy with heuristics driven by subject knowledge.",
"We categorize theorems into two groups: lower-order theorem set KB 1 and higher-order theorem set KB 2 .",
"The lower-order set KB 1 (e.g, Triangle Angle-Sum Theorem , Congruent Triangle Theorem ) only involves in two simple operations of addition and subtraction, while KB 2 (e.g, Law of Sines ) requires complex calculations.",
"In each following search step after using predicted theorems, we first enumerate theorems in the lower-order set KB 1 to update the relation set R : R t k i R t 1 , k i KB 1 .",
"If lower-order theorems fail to update R anymore, higher-order theorems are considered to update R",
"The search process stops once we find the problem goal g or the search steps reach the maximum steps allowed.",
"The whole search algorithm for Inter-GPS is presented in Algorithm",
"1. 7 Experiments 7.1 Experimental Settings Datasets and evaluation metrics.",
"We conduct experiments on the Geometry3K and GEOS (Seo et al., 2015) datasets.",
"The Geometry3K dataset involves 2,101 training data, 300 validation data, and 601 test data, respectively.",
"The GEOS dataset provides 55 official SAT problems for evaluating geometry solvers.",
"Regarding our proposed Inter-GPS model, if the one closest to the found solution among the four choices is exactly the ground truth, the found solution is considered correct.",
"For a fair comparison, if Inter-GPS fails to output the numeric value of the problem goal within allowed steps, it will randomly choose the one from the four candidates.",
"In terms of compared neural network baselines, the predicted answer has a maximum confidence score among choice candidates.",
"Baselines.",
"We implement several deep neural network baselines for geometry solvers to compare them with our method.",
"By default, these baselines formalize the geometry problem solving task as a classification problem, fed by the text embedding from a sequence encoder and the diagram representation from a visual encoder.",
"Q-only only encodes the problem text in the natural language by a bi-directional Gated Recurrent Unit (Bi-GRU) encoder (Cho et al., 2014).",
"I-only only encodes the problem diagram by a ResNet-50 encoder (He et al., 2016) as the input.",
"Q+I uses Bi-GRU and ResNet-50 to encode the text and diagram, respectively.",
"RelNet (Bansal et al., 2017) is implemented for embedding the problem text because it is a strong method for modeling entities and relations.",
"FiLM (Perez et al., 2018) is compared as it achieves effective visual reasoning for answering questions about abstract images.",
"FiLM-BERT uses the BERT encoder (Devlin et al., 2018) instead of the GRU encoder, and FiLM-BART uses the recently proposed BART encoder (Lewis et al., 2020).",
"Implementation details.",
"Main hyper-parameters used in the experiments are shown below.",
"For our symbolic solver, a set of 17 geometry theorems is collected to form the knowledge base.",
"For generating positive theorem sequences, each problem is attempted by 100 times with the maximum sequence length of 20.",
"The transformer model used in the theorem predictor has 6 layers, 12 attention heads, and a hidden embedding size of 768.",
"Search steps in Inter-GPS are set up to 100.",
"For the neural solvers, we choose the Adam optimizer and set the learning rate as 0.01, and the maximum epochs are set as 30.",
"Each experiment for Inter-GPS is repeated three times for more precise results.",
"Table 5 compares the results of symbolic solver Inter-GPS with baselines on our proposed Geometry3K dataset.",
"Apart from the overall accuracy, the results of different problem types are also reported.",
"Benefiting from symbolic reasoning with theorem knowledge, our Inter-GPS obtains an overall accuracy of 57.5%, significantly superior to all neural baselines.",
"Inter-GPS even attains a better accuracy compared to human beings.",
"Inter-GPS with ground truth formal language gains a further improvement of 20.8%.",
"Inter-GPS also obtains state-of-the-art performance over exiting geometry solvers on the GEOS dataset, as shown in Table",
"6. 7.3 Ablation Study and Discussion.",
"Search strategies.",
"The overall accuracy and average steps needed for solving problems with different search strategies in Inter-GPS are reported in Table",
"7. Predict refers to the strategy that uses the theorems from the theorem predictor followed by a random theorem sequence.",
"The strategy largely reduces the average steps to 6.5.",
"The final strategy in Inter-GPS applies the predicted theorems first and lower-order theorems in the remain search steps, and gains the best overall accuracy.",
"Problem parsers and literal sources.",
"The rule-based text parser achieves an accuracy of 97% while only 67% for the semantic text parser.",
"Table 8 reports the Inter-GPS performance fed with different sources of literals.",
"With literals generated from our problem solver, Inter-GPS achieves an accuracy of 57.5%.",
"The current text parser performs very well as there is only a slight gap between Inter-GPS with generated text literals and ground truth literals.",
"An improvement of 17.5% for Inter-GPS with annotated diagram literals indicates that there is still much space to improve for the diagram parser.",
"Searching step distribution.",
"Figure 5 compares correctly solved problem distribution by the average number of search steps in different strategies.",
"Our final Inter-GPS applies the Predict+Low-first strategy, with which 65.97% problems are solved in two steps and 70.06% solved in five steps.",
"Neural geometry solvers.",
"Current neural network baselines for geometry solving fail to achieve satisfactory results in the Geometry3K dataset.",
"It is because there are limited data samples for these neural methods to learn meaningful semantics from the problem inputs.",
"Besides, dense implicit representations might not be suitable for logical reasoning tasks like geometry problem solving.",
"We replace the inputs of problem text and diagram in the Q+I baseline with the ground truth textual and visual formal annotations and report the result in Table 9.",
"An improvement of 9.2% indicates the promising potential for neural network models for problem solving if structural representations with rich semantics are learned.",
"Failure cases.",
"Inter-GPS might not find a solution because of inaccurate parsing results and the incomplete theorem set.",
"Figure 6 illustrates some failure examples for Inter-GPS.",
"For example, diagram parsing tends to fail if there are ambiguous annotations or multiple primitives in the diagram.",
"It is difficult for the text parser to handle nested expressions and uncertain references.",
"And the symbolic solver is still not capable of solving complex problems with combined shapes and shaded areas 2 In rhombus ABCD , m DAB = 2 m ADC .",
"Interpretability in Inter-GPS.",
"Inter-GPS provides an interpretable symbolic solver for geometry problem solving.",
"First, Inter-GPS parses the problem contents into a structural representation of formal language.",
"Second, Inter-GPS performs symbolic reasoning to update the geometric relation set explicitly.",
"Last, Inter-GPS applies reasonable theorems sequentially in the search process.",
"Solving geometry problems is one of the most challenging tasks in math question answering.",
"In this paper, we propose a large-scale benchmark, Geometry3K, which consists of 3,002 high-school geometry problems with dense descriptions in formal language.",
"We further propose a novel geometry solving approach, Interpretable Geometry Problem Solver (Inter-GPS), which parses the problem as formal language from an automatic parser and performs symbolic reasoning over the theorem knowledge base to infer the answer.",
"Also, a theorem predictor with a low-first search strategy is designed to generate the reasonable theorem application sequence.",
"Experiment results show that Inter-GPS outperforms existing state-of-the-art methods by a large margin.",
"In the future, we plan to extend our work in other math question answering tasks and explore more general symbolic reasoning models.",
"This work was supported by MURI N00014-16-1-2007 and DARPA XAI N66001-17-2-4029.",
"We thank Johnson Zhou and Jiahao Li for collecting part of the data.",
"And we thank the help from Jian-heng Tang in baseline implementation.",
"The problems in Geometry3K are collected from online open sources.",
"The work in this paper may inspire the following research in symbolic reasoning and interpretable models and facilitate education."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"abstain",
"method"
] |
[
"We survey 146 papers analyzing bias in NLP systems, fnding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing bias is an inherently normative process.",
"We further fnd that these papers' proposed quantitative techniques for measuring or mitigating bias are poorly matched to their motivations and do not engage with the relevant literature outside of NLP.",
"Based on these fndings, we describe the beginnings of a path forward by proposing three recommendations that should guide work analyzing bias in NLP systems.",
"These recommendations rest on a greater recognition of the relationships between language and social hierarchies, encouraging researchers and practitioners to articulate their conceptualizations of biasi.e., what kinds of system behaviors are harmful, in what ways, to whom, and why, as well as the normative reasoning underlying these statementsand to center work around the lived experiences of members of communities affected by NLP systems, while interrogating and reimagining the power relations between technologists and such communities.",
"A large body of work analyzing bias in natural language processing (NLP) systems has emerged in recent years, including work on bias in embedding spaces (e.g., Bolukbasi et al., 2016a; Caliskan et al., 2017; Gonen and Goldberg, 2019; May et al., 2019) as well as work on bias in systems developed for a breadth of tasks including language modeling (Lu et al., 2018; Bordia and Bowman,",
"2019), coreference resolution (Rudinger et al., 2018; Zhao et al., 2018a), machine translation (Van-massenhove et al., 2018; Stanovsky et al., 2019), sentiment analysis (Kiritchenko and Mohammad, 2018), and hate speech/toxicity detection (e.g., Park et al., 2018; Dixon et al., 2018), among others.",
"Although these papers have laid vital groundwork by illustrating some of the ways that NLP systems can be harmful, the majority of them fail to engage critically with what constitutes bias in the frst place.",
"Despite the fact that analyzing bias is an inherently normative processin which some system behaviors are deemed good and others harmfulpapers on bias in NLP systems are rife with unstated assumptions about what kinds of system behaviors are harmful, in what ways, to whom, and why.",
"Indeed, the term bias",
"(or gender bias or racial bias)",
"is used to describe a wide range of system behaviors, even though they may be harmful in different ways, to different groups, or for different reasons.",
"Even papers analyzing bias in NLP systems developed for the same task often conceptualize it differently.",
"For example, the following system behaviors are all understood to be self-evident statements of racial bias:",
"(a)",
"embedding spaces in which embeddings for names associated with African Americans are closer",
"(compared to names associated with European Americans)",
"to unpleasant words than pleasant words",
"(Caliskan et al., 2017);",
"(b)",
"sentiment analysis systems yielding different intensity scores for sentences containing names associated with African Americans and sentences containing names associated with European Americans",
"(Kir-itchenko and Mohammad, 2018); and",
"(c)",
"toxicity 5455 detection systems scoring tweets containing features associated with African-American English as more offensive than tweets without these features",
"(Davidson et al., 2019; Sap et al., 2019).",
"Moreover, some of these papers focus on racial bias expressed in written text, while others focus on racial bias against authors.",
"This use of imprecise terminology obscures these important differences.",
"We survey 146 papers analyzing bias in NLP systems, fnding that their motivations are often vague and inconsistent.",
"Many lack any normative reasoning for why the system behaviors that are described as bias are harmful, in what ways, and to whom.",
"Moreover, the vast majority of these papers do not engage with the relevant literature outside of NLP to ground normative concerns when proposing quantitative techniques for measuring or mitigating bias.",
"As a result, we fnd that many of these techniques are poorly matched to their motivations, and are not comparable to one another.",
"We then describe the beginnings of a path forward by proposing three recommendations that should guide work analyzing bias in NLP systems.",
"We argue that such work should examine the relationships between language and social hierarchies; we call on researchers and practitioners conducting such work to articulate their conceptualizations of bias in order to enable conversations about what kinds of system behaviors are harmful, in what ways, to whom, and why; and we recommend deeper engagements between technologists and communities affected by NLP systems.",
"We also provide several concrete research questions that are implied by each of our recommendations.",
"Our survey includes all papers known to us analyzing bias in NLP systems146 papers in total.",
"We omitted papers about speech, restricting our survey to papers about written text only.",
"To identify the 146 papers, we frst searched the ACL Anthology 1 for all papers with the keywords bias or fairness that were made available prior to May 2020.",
"We retained all papers about social bias, and discarded all papers about other defnitions of the keywords",
"(e.g., hypothesis-only bias, inductive bias, media bias).",
"We also discarded all papers using bias in NLP systems to measure social bias in text or the real world",
"(e.g., Garg et al., 2018).",
"To ensure that we did not exclude any relevant 1 https://www.aclweb.org/anthology/ NLP task Papers Embeddings",
"papers without the keywords bias or fairness, we also traversed the citation graph of our initial set of papers, retaining any papers analyzing bias in NLP systems that are cited by or cite the papers in our initial set.",
"Finally, we manually inspected any papers analyzing bias in NLP systems from leading machine learning, humancomputer interaction, and web conferences and workshops, such as ICML, NeurIPS, AIES, FAccT, CHI, and WWW, along with any relevant papers that were made available in the Computation and Language and Computers and Society categories on arXiv prior to May 2020, but found that they had already been identifed via our traversal of the citation graph.",
"We provide a list of all 146 papers in the appendix.",
"In Table 1, we provide a breakdown of the NLP tasks covered by the papers.",
"We note that counts do not sum to 146, because some papers cover multiple tasks.",
"For example, a paper might test the effcacy of a technique for mitigating bias in embedding spaces in the context of sentiment analysis.",
"Once identifed, we then read each of the 146 papers with the goal of categorizing their motivations and their proposed quantitative techniques for measuring or mitigating bias.",
"We used a previously developed taxonomy of harms for this categorization, which differentiates between so-called allocational and representational harms",
"(Barocas et al., 2017; Crawford, 2017).",
"Allocational harms arise when an automated system allocates resources",
"(e.g., credit)",
"or opportunities",
"(e.g., jobs)",
"unfairly to different social groups; representational harms arise when a system",
"(e.g., a search engine)",
"represents some social groups in a less favorable light than others, demeans them, or fails to recognize their existence altogether.",
"Adapting and extending this taxonomy, we categorized the 146 papers' motivations and techniques into the following categories: .",
".",
"Representational harms: 2 .",
"Stereotyping that propagates negative generalizations about particular social groups.",
".",
"Differences in system performance for different social groups, language that misrepresents the distribution of different social groups in the population, or language that is denigrating to particular social groups.",
".",
"Questionable correlations between system behavior and features of language that are typically associated with particular social groups.",
".",
"Vague descriptions of bias",
"(or gender bias or racial bias)",
"or no description at all.",
".",
"Surveys, frameworks, and meta-analyses .",
"In Table 2 we provide counts for each of the six categories listed above.",
"(We also provide a list of the papers that fall into each category in the appendix.)",
"Again, we note that the counts do not sum to 146, because some papers state multiple motivations, propose multiple techniques, or propose a single technique for measuring or mitigating multiple harms.",
"Table 3, which is in the appendix, contains examples of the papers' motivations and techniques across a range of different NLP tasks.",
"Categorizing the 146 papers' motivations and proposed quantitative techniques for measuring or mitigating bias into the six categories listed above enabled us to identify several commonalities, which we present below, along with illustrative quotes.",
"2 We grouped several types of representational harms into two categories to refect that the main point of differentiation between the 146 papers' motivations and proposed quantitative techniques for measuring or mitigating bias is whether or not they focus on stereotyping.",
"Among the papers that do not focus on stereotyping, we found that most lack suffciently clear motivations and techniques to reliably categorize them further.",
"Papers state a wide range of motivations, multiple motivations, vague motivations, and sometimes no motivations at all.",
"We found that the papers' motivations span all six categories, with several papers falling into each one.",
"Appropriately, papers that provide surveys or frameworks for analyzing bias in NLP systems often state multiple motivations",
"(e.g., Hovy and Spruit, 2016; Bender, 2019; Sun et al., 2019; Rozado, 2020; Shah et al., 2020).",
"However, as the examples in Table 3",
"(in the appendix)",
"illustrate, many other papers",
"(33%)",
"do so as well.",
"Some papers",
"(16%)",
"state only vague motivations or no motivations at all.",
"For example, [N]o human should be discriminated on the basis of demographic attributes by an NLP system. Kaneko and Bollegala",
"(2019)",
"[P]rominent word embeddings [...] encode systematic biases against women and black people [...] implicating many NLP systems in scaling up social injustice. May et al.",
"(2019)",
"These examples leave unstated what it might mean for an NLP system to discriminate, what constitutes systematic biases, or how NLP systems contribute to social injustice",
"(itself undefned).",
"Papers' motivations sometimes include no normative reasoning.",
"We found that some papers",
"(32%)",
"are not motivated by any apparent normative concerns, often focusing instead on concerns about system performance.",
"For example, the frst quote below includes normative reasoningnamely that models should not use demographic information to make predictionswhile the other focuses on learned correlations impairing system performance.",
"In [text classifcation], models are expected to make predictions with the semantic information rather than with the demographic group identity information",
"( e.g., gay', black')",
"contained in the",
"sentences. Zhang et al.",
"(2020a)",
"An over-prevalence of some gendered forms in the training data leads to translations with identifable errors. Translations are better for sentences involving men and for sentences containing stereotypical gender roles. Saunders and Byrne",
"(2020)",
"Even when papers do state clear motivations, they are often unclear about why the system behaviors that are described as bias are harmful, in what ways, and to whom.",
"We found that even papers with clear motivations often fail to explain what kinds of system behaviors are harmful, in what ways, to whom, and why.",
"For example, Deploying these word embedding algorithms in practice, for example in automated translation systems or as hiring aids, runs the serious risk of perpetuating problematic biases in important societal contexts. Brunet et al.",
"(2019)",
"[I]f the systems show discriminatory behaviors in the interactions, the user experience will be adversely affected. Liu et al.",
"(2019)",
"These examples leave unstated what problematic biases or non-ideal user experiences might look like, how the system behaviors might result in these things, and who the relevant stakeholders or users might be.",
"In contrast, we fnd that papers that provide surveys or frameworks for analyzing bias in NLP systems often name who is harmed, acknowledging that different social groups may experience these systems differently due to their different relationships with NLP systems or different social positions.",
"For example, Ruane et al.",
"(2019)",
"argue for a deep understanding of the user groups [sic] characteristics, contexts, and interests when designing conversational agents.",
"Papers about NLP systems developed for the same task often conceptualize bias differently.",
"Even papers that cover the same NLP task often conceptualize bias in ways that differ substantially and are sometimes inconsistent.",
"Rows 3 and 4 of Table 3",
"(in the appendix)",
"contain machine translation papers with different conceptualizations of bias, leading to different proposed techniques, while rows 5 and 6 contain papers on bias in embedding spaces that state different motivations, but propose techniques for quantifying stereotyping.",
"Papers' motivations confate allocational and representational harms.",
"We found that the papers' motivations sometimes",
"(16%)",
"name immediate representational harms, such as stereotyping, alongside more distant allocational harms, which, in the case of stereotyping, are usually imagined as downstream effects of stereotypes on rsum fltering.",
"Many of these papers use the imagined downstream effects to justify focusing on particular system behaviors, even when the downstream effects are not measured.",
"Papers on bias in embedding spaces are especially likely to do this because embeddings are often used as input to other systems: However, none of these papers [on embeddings] have recognized how blatantly sexist the embeddings are and hence risk introducing biases of various types into real-world systems. Bolukbasi et al.",
"(2016a)",
"In contrast, papers that provide surveys or frameworks for analyzing bias in NLP systems treat representational harms as harmful in their own right.",
"For example, Mayfeld et al.",
"(2019)",
"and Ruane et al.",
"(2019)",
"cite the harmful reproduction of dominant linguistic norms by NLP systems",
"(a point to which we return in section 4), while Bender",
"(2019)",
"outlines a range of harms, including seeing stereotypes in search results and being made invisible to search engines due to language practices.",
"Papers' techniques are not well grounded in the relevant literature outside of NLP.",
"Perhaps unsurprisingly given that the papers' motivations are often vague, inconsistent, and lacking in normative reasoning, we also found that the papers' proposed quantitative techniques for measuring or mitigating bias do not effectively engage with the relevant literature outside of NLP.",
"Papers on stereotyping are a notable exception: the Word Embedding Association Test",
"(Caliskan et al., 2017)",
"draws on the Implicit Association Test",
"(Greenwald et al., 1998)",
"from the social psychology literature, while several techniques operationalize the well-studied Angry Black Woman stereotype",
"(Kiritchenko and Mohammad, 2018; May et al., 2019; Tan and Celis, 2019)",
"and the double bind faced by women",
"(May et al., 2019; Tan and Celis, 2019), in which women who succeed at stereotypically male tasks are perceived to be less likable than similarly successful men",
"(Heilman et al., 2004).",
"Tan and Celis",
"(2019)",
"also examine the compounding effects of race and gender, drawing on Black feminist scholarship on intersectionality",
"(Crenshaw, 1989).",
"Papers' techniques are poorly matched to their motivations.",
"We found that although 21% of the papers include allocational harms in their motivations, only four papers actually propose techniques for measuring or mitigating allocational harms.",
"Papers focus on a narrow range of potential sources of bias.",
"We found that nearly all of the papers focus on system predictions as the potential sources of bias, with many additionally focusing on bias in datasets",
"(e.g., differences in the number of gendered pronouns in the training data",
"(Zhao et al., 2019)).",
"Most papers do not interrogate the normative implications of other decisions made during the development and deployment lifecycle perhaps unsurprising given that their motivations sometimes include no normative reasoning.",
"A few papers are exceptions, illustrating the impacts of task defnitions, annotation guidelines, and evaluation metrics: Cao and Daum",
"(2019)",
"study how folk conceptions of gender",
"(Keyes, 2018)",
"are reproduced in coreference resolution systems that assume a strict gender dichotomy, thereby maintaining cisnormativity; Sap et al.",
"(2019)",
"focus on the effect of priming annotators with information about possible dialectal differences when asking them to apply toxicity labels to sample tweets, fnding that annotators who are primed are signifcantly less likely to label tweets containing features associated with African-American English as offensive.",
"We now describe how researchers and practitioners conducting work analyzing bias in NLP systems might avoid the pitfalls presented in the previous sectionthe beginnings of a path forward.",
"We propose three recommendations that should guide such work, and, for each, provide several concrete research questions.",
"We emphasize that these questions are not comprehensive, and are intended to generate further questions and lines of engagement.",
"(R1)",
"Ground work analyzing bias in NLP systems in the relevant literature outside of NLP that explores the relationships between language and social hierarchies.",
"Treat representational harms as harmful in their own right.",
"(R2)",
"Provide explicit statements of why the system behaviors that are described as bias are harmful, in what ways, and to whom.",
"Be forthright about the normative reasoning",
"(Green, 2019)",
"underlying these statements.",
"(R3)",
"Examine language use in practice by engaging with the lived experiences of members of communities affected by NLP systems.",
"Interrogate and reimagine the power relations between technologists and such communities.",
"Turning frst to",
"(R1)",
", we argue that work analyzing bias in NLP systems will paint a much fuller picture if it engages with the relevant literature outside of NLP that explores the relationships between language and social hierarchies.",
"Many disciplines, including sociolinguistics, linguistic anthropology, sociology, and social psychology, study how language takes on social meaning and the role that language plays in maintaining social hierarchies.",
"For example, language is the means through which social groups are labeled and one way that beliefs about social groups are transmitted",
"(e.g., Maass, 1999; Beukeboom and Burgers, 2019).",
"Group labels can serve as the basis of stereotypes and thus reinforce social inequalities: [T]he label content functions to identify a given category of people, and thereby conveys category boundaries and a position in a hierarchical taxonomy",
"(Beukeboom and Burgers, 2019).",
"Similarly, controlling images, such as stereotypes of Black women, which are linguistically and visually transmitted through literature, news media, television, and so forth, provide ideological justifcation for their continued oppression",
"(Collins, 2000, Chapter 4).",
"As a result, many groups have sought to bring about social changes through changes in language, disrupting patterns of oppression and marginal-ization via so-called gender-fair language",
"(Sczesny et al., 2016; Menegatti and Rubini, 2017), language that is more inclusive to people with disabilities",
"(ADA, 2018), and language that is less dehumanizing",
"(e.g., abandoning the use of the term illegal in everyday discourse on immigration in the U.S.",
"(Rosa, 2019)).",
"The fact that group labels are so contested is evidence of how deeply intertwined language and social hierarchies are.",
"Taking gender-fair language as an example, the hope is that reducing asymmetries in language about women and men will reduce asymmetries in their social standing.",
"Meanwhile, struggles over language use often arise from dominant social groups' desire to control both material and symbolic resourcesi.e., the right to decide what words will mean and to control those meaningsas was the case in some white speakers' insistence on using offensive place names against the objections of Indigenous speakers",
"(Hill, 2008, Chapter 3).",
"Sociolinguists and linguistic anthropologists have also examined language attitudes and language ideologies, or people's metalinguistic beliefs about language: Which language varieties or practices are taken as standard, ordinary, or unmarked?",
"Which are considered correct, prestigious, or appropriate for public use, and which are considered incorrect, uneducated, or offensive",
"(e.g., Campbell-5459 Kibler, 2009; Preston, 2009; Loudermilk, 2015; Lanehart and Malik, 2018)?",
"Which are rendered invisible",
"(Roche, 2019)?",
"3 Language ideologies play a vital role in reinforcing and justifying social hierarchies because beliefs about language varieties or practices often translate into beliefs about their speakers",
"(e.g. Alim et al., 2016; Rosa and Flores, 2017; Craft et al., 2020).",
"For example, in the U.S., the portrayal of non-white speakers' language varieties and practices as linguistically defcient helped to justify violent European colonialism, and today continues to justify enduring racial hierarchies by maintaining views of non-white speakers as lacking the language required for complex thinking processes and successful engagement in the global economy",
"(Rosa and Flores, 2017).",
"Recognizing the role that language plays in maintaining social hierarchies is critical to the future of work analyzing bias in NLP systems.",
"First, it helps to explain why representational harms are harmful in their own right.",
"Second, the complexity of the relationships between language and social hierarchies illustrates why studying bias in NLP systems is so challenging, suggesting that researchers and practitioners will need to move beyond existing algorithmic fairness techniques.",
"We argue that work must be grounded in the relevant literature outside of NLP that examines the relationships between language and social hierarchies; without this grounding, researchers and practitioners risk measuring or mitigating only what is convenient to measure or mitigate, rather than what is most normatively concerning.",
"More specifcally, we recommend that work analyzing bias in NLP systems be reoriented around the following question: How are social hierarchies, language ideologies, and NLP systems coproduced?",
"This question mirrors Benjamin's",
"(2020)",
"call to examine how race and technology are coproducedi.e., how racial hierarchies, and the ideologies and discourses that maintain them, create and are re-created by technology.",
"We recommend that researchers and practitioners similarly ask how existing social hierarchies and language ideologies drive the development and deployment of NLP systems, and how these systems therefore reproduce these hierarchies and ideologies.",
"As a starting point for reorienting work analyzing bias in NLP systems around this question, we 3 Language ideologies encompass much more than this; see, e.g., Lippi-Green",
"provide the following concrete research questions: .",
"How do social hierarchies and language ideologies infuence the decisions made during the development and deployment lifecycle?",
"What kinds of NLP systems do these decisions result in, and what kinds do they foreclose?",
"\u0005 General assumptions: To which linguistic norms do NLP systems adhere",
"(Bender, 2019; Ruane et al., 2019)?",
"Which language practices are implicitly assumed to be standard, ordinary, correct, or appropriate?",
"\u0005 Task defnition: For which speakers are NLP systems",
"(and NLP resources)",
"developed?",
"(See Joshi et al.",
"(2020)",
"for a",
"discussion.)",
"How do task defnitions discretize the world?",
"For example, how are social groups delineated when defning demographic attribute prediction tasks",
"(e.g., Koppel et al., 2002; Rosenthal and McKeown, 2011; Nguyen et al., 2013)?",
"What about languages in native language prediction tasks",
"(Tetreault et al., 2013)?",
"\u0005 Data: How are datasets collected, preprocessed, and labeled or annotated?",
"What are the impacts of annotation guidelines, annotator assumptions and perceptions",
"(Olteanu et al., 2019; Sap et al., 2019; Geiger et al., 2020), and annotation aggregation processes",
"(Pavlick and Kwiatkowski, 2019)?",
"\u0005 Evaluation: How are NLP systems evaluated?",
"What are the impacts of evaluation metrics",
"(Olteanu et al., 2017)?",
"Are any non-quantitative evaluations performed?",
"How do NLP systems reproduce or transform language ideologies?",
"Which language varieties or practices come to be deemed good or bad?",
"Might good language simply mean language that is easily handled by existing NLP systems?",
"For example, linguistic phenomena arising from many language practices",
"(Eisenstein, 2013)",
"are described as noisy text and often viewed as a target for normalization.",
"How do the language ideologies that are reproduced by NLP systems maintain social hierarchies?",
"Which representational harms are being measured or mitigated?",
"Are these the most normatively concerning harms, or merely those that are well handled by existing algorithmic fairness techniques?",
"Are there other representational harms that might be analyzed?",
".",
".",
"Turning now to",
"(R2)",
", we argue that work analyzing bias in NLP systems should provide explicit statements of why the system behaviors that are described as bias are harmful, in what ways, and to whom, as well as the normative reasoning underlying these statements.",
"In other words, researchers and practitioners should articulate their conceptualizations of bias.",
"As we described above, papers often contain descriptions of system behaviors that are understood to be self-evident statements of bias.",
"This use of imprecise terminology has led to papers all claiming to analyze bias in NLP systems, sometimes even in systems developed for the same task, but with different or even inconsistent conceptualizations of bias, and no explanations for these differences.",
"Yet analyzing bias is an inherently normative processin which some system behaviors are deemed good and others harmfuleven if assumptions about what kinds of system behaviors are harmful, in what ways, for whom, and why are not stated.",
"We therefore echo calls by Bardzell and Bardzell",
"(2011), Keyes et al.",
"(2019), and Green",
"(2019)",
"for researchers and practitioners to make their normative reasoning explicit by articulating the social values that underpin their decisions to deem some system behaviors as harmful, no matter how obvious such values appear to be.",
"We further argue that this reasoning should take into account the relationships between language and social hierarchies that we described above.",
"First, these relationships provide a foundation from which to approach the normative reasoning that we recommend making explicit.",
"For example, some system behaviors might be harmful precisely because they maintain social hierarchies.",
"Second, if work analyzing bias in NLP systems is reoriented to understand how social hierarchies, language ideologies, and NLP systems are coproduced, then this work will be incomplete if we fail to account for the ways that social hierarchies and language ideologies determine what we mean by bias in the frst place.",
"As a starting point, we therefore provide the following concrete research questions: .",
"What kinds of system behaviors are described as bias?",
"What are their potential sources",
"(e.g., general assumptions, task defnition, data)?",
".",
"In what ways are these system behaviors harmful, to whom are they harmful, and why?",
".",
"What are the social values",
"(obvious or not)",
"that underpin this conceptualization of bias? 4.3 Language use in practice Finally, we turn to",
"(R3)",
".",
"Our perspective, which rests on a greater recognition of the relationships between language and social hierarchies, suggests several directions for examining language use in practice.",
"Here, we focus on two.",
"First, because language is necessarily situated, and because different social groups have different lived experiences due to their different social positions",
"(Hanna et al., 2020)particularly groups at the intersections of multiple axes of oppressionwe recommend that researchers and practitioners center work analyzing bias in NLP systems around the lived experiences of members of communities affected by these systems.",
"Second, we recommend that the power relations between technologists and such communities be interrogated and reimagined.",
"Researchers have pointed out that algorithmic fairness techniques, by proposing incremental technical mitigationse.g., collecting new datasets or training better modelsmaintain these power relations by",
"(a)",
"assuming that automated systems should continue to exist, rather than asking whether they should be built at all, and",
"(b)",
"keeping development and deployment decisions in the hands of technologists",
"(Bennett and Keyes, 2019; Cifor et al., 2019; Green, 2019; Katell et al., 2020).",
"There are many disciplines for researchers and practitioners to draw on when pursuing these directions.",
"For example, in humancomputer interaction, Hamidi et al.",
"(2018)",
"study transgender people's experiences with automated gender recognition systems in order to uncover how these systems reproduce structures of transgender exclusion by redefning what it means to perform gender normally.",
"Value-sensitive design provides a framework for accounting for the values of different stakeholders in the design of technology",
"(e.g., Friedman et al., 2006; Friedman and Hendry, 2019; Le Dantec et al., 2009; Yoo et al., 2019), while participatory design seeks to involve stakeholders in the design process itself",
"(Sanders, 2002; Muller, 2007; Simonsen and Robertson, 2013; DiSalvo et al., 2013).",
"Participatory action research in education",
"(Kemmis, 2006)",
"and in language documentation and reclamation",
"(Junker, 2018)",
"is also relevant.",
"In particular, work on language reclamation to support decolonization and tribal sovereignty",
"(Leonard, 2012)",
"and work in sociolinguistics focus-5461 ing on developing co-equal research relationships with community members and supporting linguistic justice efforts",
"(e.g., Bucholtz et al., 2014, 2016, 2019)",
"provide examples of more emancipatory relationships with communities.",
"Finally, several workshops and events have begun to explore how to empower stakeholders in the development and deployment of technology",
"(Vaccaro et al., 2019; Givens and Morris, 2020; Sassaman et al., 2020)",
"4 and how to help researchers and practitioners consider when not to build systems at all",
"(Barocas et al., 2020).",
"As a starting point for engaging with communities affected by NLP systems, we therefore provide the following concrete research questions: .",
"How do communities become aware of NLP systems?",
"Do they resist them, and if so, how?",
".",
"What additional costs are borne by communities for whom NLP systems do not work well?",
".",
"Do NLP systems shift power toward oppressive institutions",
"(e.g., by enabling predictions that communities do not want made, linguistically based unfair allocation of resources or opportunities",
"(Rosa and Flores, 2017), surveillance, or censorship), or away from such institutions?",
".",
"Who is involved in the development and deployment of NLP systems?",
"How do decision-making processes maintain power relations between technologists and communities affected by NLP systems?",
"Can these processes be changed to reimagine these relations?",
"To illustrate our recommendations, we present a case study covering work on African-American English",
"(AAE).",
"5 Work analyzing bias in the context of AAE has shown that part-of-speech taggers, language identifcation systems, and dependency parsers all work less well on text containing features associated with AAE than on text without these features",
"(Jrgensen et al., 2015, 2016; Blodgett et al., 2016, 2018), and that toxicity detection systems score tweets containing features associated with AAE as more offensive than tweets without them",
"(Davidson et al., 2019; Sap et al., 2019).",
"These papers have been critical for highlighting AAE as a language variety for which existing NLP 4 Also https://participatoryml.github.io/ 5 This language variety has had many different names over the years, but is now generally called AfricanAmerican English",
"(AAE), African-American Vernacular English",
"(AAVE), or African-American Language",
"(AAL)",
"(Green, 2002; Wolfram and Schilling, 2015; Rickford and King, 2016).",
"systems may not work, illustrating their limitations.",
"However, they do not conceptualize racial bias in the same way.",
"The frst four of these papers simply focus on system performance differences between text containing features associated with AAE and text without these features.",
"In contrast, the last two papers also focus on such system performance differences, but motivate this focus with the following additional reasoning: If tweets containing features associated with AAE are scored as more offensive than tweets without these features, then this might",
"(a)",
"yield negative perceptions of AAE;",
"(b)",
"result in disproportionate removal of tweets containing these features, impeding participation in online platforms and reducing the space available online in which speakers can use AAE freely; and",
"(c)",
"cause AAE speakers to incur additional costs if they have to change their language practices to avoid negative perceptions or tweet removal.",
"More importantly, none of these papers engage with the literature on AAE, racial hierarchies in the U.S., and raciolinguistic ideologies.",
"By failing to engage with this literaturethereby treating AAE simply as one of many non-Penn Treebank varieties of English or perhaps as another challenging domainwork analyzing bias in NLP systems in the context of AAE fails to situate these systems in the world.",
"Who are the speakers of AAE?",
"How are they viewed?",
"We argue that AAE as a language variety cannot be separated from its speakers primarily Black people in the U.S., who experience systemic anti-Black racismand the language ideologies that reinforce and justify racial hierarchies.",
"Even after decades of sociolinguistic efforts to legitimize AAE, it continues to be viewed as bad English and its speakers continue to be viewed as linguistically inadequatea view called the defcit perspective",
"(Alim et al., 2016; Rosa and Flores, 2017).",
"This perspective persists despite demonstrations that AAE is rule-bound and grammatical",
"(Mufwene et al., 1998; Green, 2002), in addition to ample evidence of its speakers' linguistic adroitness",
"(e.g., Alim, 2004; Rickford and King, 2016).",
"This perspective belongs to a broader set of raciolinguistic ideologies",
"(Rosa and Flores, 2017), which also produce allocational harms; speakers of AAE are frequently penalized for not adhering to dominant language practices, including in the education system",
"(Alim, 2004; Terry et al., 2010), when seeking housing",
"(Baugh, 2018), and in the judicial system, where their testimony is misunderstood or, worse yet, disbelieved",
"(Rickford and King, 2016; Jones et al., 2019).",
"These raciolinguistic ideologies position racialized communities as needing linguistic intervention, such as language education programs, in which these and other harms can be reduced if communities accommodate to dominant language practices",
"(Rosa and Flores, 2017).",
"In the technology industry, speakers of AAE are often not considered consumers who matter.",
"For example, Benjamin",
"(2019)",
"recounts an Apple employee who worked on speech recognition for Siri: As they worked on different English dialects Australian, Singaporean, and Indian English [the employee] asked his boss: What about African American English?' To this his boss responded: Well, Apple products are for the premium market.' The reality, of course, is that speakers of AAE tend not to represent the premium market precisely because of institutions and policies that help to maintain racial hierarchies by systematically denying them the opportunities to develop wealth that are available to white Americans",
"(Rothstein, 2017) an exclusion that is reproduced in technology by countless decisions like the one described above.",
"Engaging with the literature outlined above situates the system behaviors that are described as bias, providing a foundation for normative reasoning.",
"Researchers and practitioners should be concerned about racial bias in toxicity detection systems not only because performance differences impair system performance, but because they reproduce longstanding injustices of stigmatization and disenfranchisement for speakers of AAE.",
"In re-stigmatizing AAE, they reproduce language ideologies in which AAE is viewed as ungrammatical, uneducated, and offensive.",
"These ideologies, in turn, enable linguistic discrimination and justify enduring racial hierarchies",
"(Rosa and Flores, 2017).",
"Our perspective, which understands racial hierarchies and raciolinguistic ideologies as structural conditions that govern the development and deployment of technology, implies that techniques for measuring or mitigating bias in NLP systems will necessarily be incomplete unless they interrogate and dismantle these structural conditions, including the power relations between technologists and racialized communities.",
"We emphasize that engaging with the literature on AAE, racial hierarchies in the U.S., and raciolinguistic ideologies can generate new lines of engagement.",
"These lines include work on the ways that the decisions made during the development and deployment of NLP systems produce stigmatization and disenfranchisement, and work on AAE use in practice, such as the ways that speakers of AAE interact with NLP systems that were not designed for them.",
"This literature can also help researchers and practitioners address the allocational harms that may be produced by NLP systems, and ensure that even well-intentioned NLP systems do not position racialized communities as needing linguistic intervention or accommodation to dominant language practices.",
"Finally, researchers and practitioners wishing to design better systems can also draw on a growing body of work on anti-racist language pedagogy that challenges the defcit perspective of AAE and other racialized language practices",
"(e.g. Flores and Chaparro, 2018; Baker-Bell, 2019; Martnez and Meja, 2019), as well as the work that we described in section 4.3 on reimagining the power relations between technologists and communities affected by technology.",
"By surveying 146 papers analyzing bias in NLP systems, we found that",
"(a)",
"their motivations are often vague, inconsistent, and lacking in normative reasoning; and",
"(b)",
"their proposed quantitative techniques for measuring or mitigating bias are poorly matched to their motivations and do not engage with the relevant literature outside of NLP.",
"To help researchers and practitioners avoid these pitfalls, we proposed three recommendations that should guide work analyzing bias in NLP systems, and, for each, provided several concrete research questions.",
"These recommendations rest on a greater recognition of the relationships between language and social hierarchiesa step that we see as paramount to establishing a path forward.",
"This paper is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1451512.",
"Any opinion, fndings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily refect the views of the National Science Foundation.",
"We thank the reviewers for their useful feedback, especially the suggestion to include additional details about our method."
] | [
"objective",
"objective",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"other",
"other"
] |
[
"Recently, knowledge distillation (KD) has shown great success in BERT compression.",
"Instead of only learning from the teacher's soft label as in conventional KD, researchers find that the rich information contained in the hidden layers of BERT is conducive to the student's performance.",
"To better exploit the hidden knowledge, a common practice is to force the student to deeply mimic the teacher's hidden states of all the tokens in a layer-wise manner.",
"In this paper, however, we observe that although distilling the teacher's hidden state knowledge (HSK) is helpful, the performance gain (marginal utility) diminishes quickly as more HSK is distilled.",
"To understand this effect, we conduct a series of analysis.",
"Specifi-cally, we divide the HSK of BERT into three dimensions, namely depth, length and width.",
"We first investigate a variety of strategies to extract crucial knowledge for each single dimension and then jointly compress the three dimensions.",
"In this way, we show that 1) the student's performance can be improved by extracting and distilling the crucial HSK, and 2) using a tiny fraction of HSK can achieve the same performance as extensive HSK distillation.",
"Based on the second finding, we further propose an efficient KD paradigm to compress BERT, which does not require loading the teacher during the training of student.",
"For two kinds of student models and computing devices, the proposed KD paradigm gives rise to training speedup of 2.7 3.4 .",
"Since the launch of BERT (Devlin et al., 2019), pre-trained language models (PLMs) have been advancing the state-of-the arts (SOTAs) in a wide range of NLP tasks.",
"At the same time, the growing Work was done when Yuanxin Liu was an intern at Pattern Recognition Center, WeChat AI, Tencent Inc, China.",
"size of PLMs has inspired a wave of research interest in model compression (Han et al., 2016) in the NLP community, which aims to facilitate the deployment of the powerful PLMs to resource-limited scenarios.",
"Knowledge distillation (KD) (Hinton et al., 2015) is an effective technique in model compression.",
"In conventional KD, the student model is trained to imitate the teacher's prediction over classes, i.e., the soft labels.",
"Subsequently, Romero et al. (2015) find that the intermediate representations in the teacher's hidden layers can also serve as a useful source of knowledge.",
"As an initial attempt to introduce this idea to BERT compression, PKD (Sun et al., 2019) proposed to distill representations of the [CLS] token in BERT's hidden layers, and later studies (Jiao et al., 2020; Sun et al., 2020; Hou et al., 2020; Liu et al., 2021) extend the distillation of hidden state knowledge (HSK) to all the tokens.",
"In contrast to the previous work that attempts to increase the amount of HSK, in this paper we explore towards the opposite direction to compress HSK.",
"We make the observation that although distilling HSK is helpful, the marginal utility diminishes quickly as the amount of HSK increases.",
"To understand this effect, we conduct a series of analysis by compressing the HSK from three dimensions, namely depth, length and width (see Section 2.3 for detailed description).",
"We first compress each single dimension and compare a variety of strategies to extract crucial knowledge.",
"Then, we jointly compress the three dimensions using a set of compression configurations, which specify the amount of HSK assigned to each dimension.",
"Figure 1 shows the results on QNLI dataset.",
"We can find that 1) perceivable performance improvement can be obtained by extracting and distilling the crucial HSK, and 2) with only a tiny fraction of HSK the students can achieve the same performance as extensive HSK distillation.",
"Based on the second finding, we further propose an efficient paradigm to distill HSK.",
"Concretely, we run BERT over the training set to obtain and store a subset of HSK.",
"This can be done on cloud devices with sufficient computational capability.",
"Given a target device with limited resource, we can compress BERT and select the amount of HSK accordingly.",
"Then, the compressed model can perform KD on either the cloud or directly on the target device using the selected HSK and the original training data, dispensing with the need to load the teacher model.",
"In summary, our maojor contributions are: We observe the marginal utility diminishing effect of HSK in BERT KD.",
"To our knowledge, we are the first attempt to systematically study knowledge compression in BERT KD.",
"We conduct exploratory studies on how to extract the crucial knowledge in HSK, based on which we obtain perceivable improvements over a widely-used HSK distillation strategy.",
"We propose an efficient KD paradigm based on the empirical findings.",
"Experiments on the GLUE benchmark for NLU (Wang et al., 2019) show that, the proposal gives rise to training speedup of 2.7 3.4 for TinyBERT and ROSITA on GPU and CPU 1 .",
"text sequence x tokenized by WordPiece (Wu et al., 2016).",
"There are two special tokens in x : [CLS] is inserted in the left-most position to aggregate the sequence representation and [SEP] is used to separate text segments.",
"By summing up the token embedding, the position embedding and the segment embedding, the embedding layer outputs a sequence of vectors E = (cid:2) e 1 , , e | x | (cid:3) R | x | d H , where d H is the hidden size of the model.",
"Then, E passes through the stacked Transformer layers, which can be formulated as: H l = Trm l ( H l 1 ) , l [1 , L ] (1) where H l = (cid:2) h l, 1 , , h l, | x | (cid:3) R | x | d H is the outputs of the l th layer and H 0 = E .",
"Each Transformer layer is composed of two sub-layers: the multi-head self-attention layer and the feed-forward network (FFN).",
"Each sub-layer is followed by a sequence of dropout (Srivastava et al., 2014), residual connection (He et al., 2016) and layer normalization (Ba et al., 2016).",
"Finally, for the tasks of NLU, a task-specific classifier is employed by taking as input the representation of [CLS] in the L th layer.",
"Knowledge distillation is a widely-used technique in model compression, where the compressed model (student) is trained under the guidance of the original model (teacher).",
"This is achieved by minimizing the difference between the features produced by the teacher and the student: LKD = (cid:88) ( f S ,f T ) L (cid:0) f S ( x ) , f T ( x ) (cid:1) (2) where (cid:0) f S , f T (cid:1) is a pair of features from student and teacher respectively.",
"L is the loss function and x is a data sample.",
"In terms of BERT compression, the predicted probability over classes, the intermediate representations and the self-attention distributions can be used as the features to transfer.",
"In this paper, we focus on the intermediate representations { H l } Ll =0 (i.e., the HSK), which have shown to be a useful source of knowledge in BERT compression.",
"The loss function is computed as the Mean Squared Error (MSE) in a layer-wise way: LHSK = L (cid:48) (cid:88) l =0 MSE (cid:16) H Sl W , H Tg ( l ) (cid:17) (3) where L (cid:48) is the student's layer number and g ( l ) is the layer mapping function to select teacher layers.",
"W R d SH d TH is the linear transformation to project the student's representations H Sl to the same size as the teacher's representation H Tl .",
"According to Equation 3, the HSK from teacher can be stacked into a tensor (cid:98) HT = (cid:104) H Tg (0) , , H Tg ( L (cid:48) ) (cid:105) R ( L (cid:48) +1) | x | d TH , which consists of three structural dimensions, namely depth, length and width.",
"For the depth dimension, (cid:98) HT can be compressed by eliminating entire layers.",
"By dropping the representations corresponding to particular tokens, we compress the length dimension.",
"When it comes to the width dimension, we set the eliminated activations to zero.",
"We will discuss the strategies to compress each dimension later in Section",
"4. 3 Experimental Setups 3.1 Datasets We perform experiments on seven tasks from the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019): CoLA (linguistic acceptability), SST-2 (sentiment analy-sis), RTE, QNLI, MNLI-m and MNLI-mm (natural language inference), MRPC and STS-B (semantic matching/similarity).",
"Due to space limitation, we only report results on CoLA, SST-2, QNLI and MNLI for single-dimension HSK compression in Section 4, and results on the other three tasks are presented in Appendix E. 3.2 Evaluation Following (Devlin et al., 2019), for the dev set, we use Matthew's correlation and Spearman correlation to evaluate the performance on CoLA and STS-B respectively.",
"For the other tasks, we report the classification accuracy.",
"We use the dev set to conduct our exploratory studies and the test set results are reported to compare HSK compression with the existing distillation strategy.",
"For the test set of MRPC, we report the results of F1 score.",
"We take two representative KD-based methods, i.e., TinyBERT (Jiao et al., 2020) and ROSITA (Liu et al., 2021), as examples to conduct our analysis.",
"TinyBERT is a compact version of BERT that is randomly initialized.",
"It is trained with two-stage KD: first on the unlabeled general domain data and then on the task-specific training data.",
"ROSITA replaces the first stage KD with structured pruning and matrix factorization, which can be seen as a direct transfer of BERT's knowledge from the model parameters.",
"We focus on KD with the task-specific training data and do not use any data augmentation.",
"For TinyBERT, the student model is initialized with the 4-layer general distillation model provided by Jiao et al. (2020) (denoted as TinyBERT 4 ).",
"For ROSITA, we first fine-tune BERTBASE on the downstream task and then compress it following Liu et al. (2021) to obtain a 6-layer student model (denoted as ROSITA 6 ).",
"The fine-tuned BERTBASE is used as the shared teacher for TinyBERT and ROSITA.",
"Following Jiao et al. (2020), we first conduct HSK distillation as in Equation 3 (w/o distilling the self-attention distribution) and then distill the teacher's predictions using cross-entropy loss.",
"All the results are averaged over three runs with different random seeds.",
"The model architecture of the students and the hyperparameter settings can be seen in Appendix A and Appendix B respectively.",
"Researches on model pruning have shown that the structural units in a model are of different levels of importance, and the unimportant ones can be dropped without affecting the performance.",
"In this section, we investigate whether the same law holds for HSK compression in KD.",
"We study the three dimensions separately and compare a variety of strategies to extract the crucial knowledge.",
"When a certain dimension is compressed, the other two dimensions are kept to full scale.",
"From the layer point of view, HSK compression can be divided into two steps.",
"First, the layer mapping function g ( l ) selects one of the teacher layers for each student layer.",
"This produces L (cid:48) + 1 pairs of teacher-student features: (cid:104) ( HS 0 , HT g (0) ) , , ( HS L (cid:48) , HT g ( L (cid:48) ) ) (cid:105) .",
"Second, a subset of these feature pairs are selected to perform HSK distillation.",
"For the first step, a simple but effective strategy Embedding Text Sequence Trm 1 Trm 2 Trm 3 Trm 4 Trm 5 Trm 6 Embedding Text Sequence Trm 1 Trm 2 Trm 3 Student Teacher Embedding Text Sequence Trm 1 Trm 2 Trm 3 Trm 4 Trm 5 Trm 6 Embedding Text Sequence Trm 1 Trm 2 Trm 3 Student Teacher Figure 2: Illustration of the redesigned uniform layer mapping strategy.",
"is the uniform mapping function: g ( l ) = l L L (cid:48) , mod( L, L (cid:48) ) = 0 (4) In this way, the teacher layers are divided into L (cid:48) blocks and the top layer of each block serves as the guidance in KD.",
"Recently, Wang et al. (2020a) empirically show that the upper-middle layers of BERT, as compared with the top layer, are a better choice to guide the top layer of student in self-attention distillation.",
"Inspired by this, we redesign Equation 4 to allow the top student layer to distill knowledge from an upper-middle teacher layer, and the lower layers follow the uniform mapping principle.",
"This function can be formulated as: g ( l, L top ) = l round( L top L (cid:48) ) (5) where L top is the teacher layer corresponding to the top student layer and round() is the rounding-off operation.",
"Figure 2 gives an illustration of g ( l, L top ) with a 6-layer teacher and a 3-layer student.",
"Specifically, for the 12-layer BERTBASE teacher, we select L top from { 8 , 10 , 12 } .",
"For the second step, we simply keep the top ND feature pairs: { ( HS l , HT g ( l,L top ) ) } L (cid:48) l = L (cid:48) ND +1 .",
"Figure 3 presents the results of depth compression with different layer mapping functions.",
"We can find that: 1) For the g ( l, 12) mapping function (the grey lines), depth compression generally has a negative impact on the students' performance.",
"Specially, the performance of ROSITA 6 declines drastically when the number of layers is reduced to 1 3 .",
"2) In terms of the g ( l, 10) and g ( l, 8) mapping functions (the blue and orange lines), HSK distillation with only one or two layers can achieve comparable performance as using all the L (cid:48) + 1 layers.",
"On the QNLI and MNLI datasets, the performance can even be improved by eliminating the lower layers.",
"3) In general, the student achieves better results with the redesigned layer mapping function in Equation 5 across the four tasks.",
"This demonstrates that, like the self-attention knowledge, the most crucial HSK does not necessarily reside in the top BERT layer, which reveals a potential way to improve HSK distillation of BERT.",
"4) Compared with g ( l, 8) , the improvement brought by g ( l, 10) is more stable across different tasks and student models.",
"Therefore, we use the g ( l, 10) layer mapping function when investigating the other two dimensions.",
"To compress the length dimension, we design a method to measure the tokens' importance by using the teacher's self-attention distribution.",
"The intuition is that self-attention controls the information flow among tokens across layers, and thus the representations of the most attended tokens may contain crucial information.",
"Assuming that the teacher has A h attention heads, and the attention weights in the l th layer is A Tl = (cid:110) A Tl,a (cid:111) A h a =1 , where A Tl,a R | x || x | is the attention matrix of the a th head.",
"Each row of A Tl,a is the attention distribution of a particular token to all the tokens.",
"In our length compression strategy, the importance score of the tokens is the attention distribution of the [CLS] token (i.e., the first row in 0.05 0.10 0.15 0.20 0.25 0.30 25.0 27.5 30.0 32.5 35.0 37.5 40.0 42.5 C o LAM cc LeftAttAtt w/o [SEP] Att (L top =12) w/o [SEP] full length 0.05 0.10 0.15 0.20 0.25 0.30 85 86 87 88 89 90 91 SST2 A cc 0.00 0.05 0.10 0.15 0.20 HSK Length (Normalized) 85.0 85.5 86.0 86.5 87.0 87.5 QNLIA cc 0.00 0.05 0.10 0.15 0.20 HSK Length (Normalized) 79.0 79.5 80.0 80.5 81.0 81.5 MNLIm A cc Figure 4: Length compression results of ROSITA 6 on CoLA, SST-2, QNLI and MNLI.",
"AT l,a ) averaged over the A h heads: S l = 1 A h A h (cid:88) a =1 A Tl,a, 1 , S l R | x | (6) To match the depth of the student, we employ the layer mapping function in Equation 5 to select S g ( l,L top ) for the l th student layer.",
"The length compression strategies examined in this section are summarized as: Att is the attention-based strategy as described above.",
"The layer mapping function to select S is the same as the one to select HSK, i.e., g ( l, 10) .",
"Att ( L top = 12 ) w/o [SEP] is different from Att w/o [SEP] in that it utilizes g ( l, 12) to select S .",
"Left is a naive baseline that discards tokens from the tail of the text sequence.",
"When the token number is reduced to 1, the student only distills the HSK from the [CLS] token.",
"The length compression results are shown in Figure 4 and Figure",
"5. We can derive the following observations: 1) For all strategies, significant performance decline can only be observed when HSK length is compressed heavily (to less than 0 . 05 0 . 30 ).",
"In some cases, using a subset of tokens' representation even leads to perceivable 0.05 0.10 0.15 0.20 0.25 0.30 24 26 28 30 32 34 C o LAM cc LeftAttAtt w/o [SEP] Att (L top =12) w/o [SEP] full length 0.0 0.1 0.2 0.3 0.4 88.0 88.5 89.0 89.5 90.0 90.5 SST2 A cc 0.00 0.05 0.10 0.15 0.20 HSK Length (Normalized) 85.75 86.00 86.25 86.50 86.75 87.00 87.25 QNLIA cc 0.00 0.05 0.10 0.15 0.20 HSK Length (Normalized) 79.50 79.75 80.00 80.25 80.50 80.75 81.00 81.25 MNLIm A cc Figure 5: Length compression results of TinyBERT 4 on CoLA, SST-2, QNLI and MNLI.",
"improvement over the full length (e.g., ROSITA 6 on CoLA and TinyBERT 4 on SST-2 and QNLI).",
"2) The performance of Att is not satisfactory.",
"When being applied to ROSITA 6 , the Att strategy under-performs the Left baseline.",
"The results of Att in TinyBERT 4 , though better than those in ROSITA 6 , still lag behind the other strategies at the left-most points.",
"3) Excluding [SEP] in the Att strategy alleviates the drop in performance, especially when HSK length is compressed to less than 0.05.",
"4) As a general trend, further improvement over Att w/o [SEP] can be obtained by using g ( l, 12) in the selection of S , which produces the most robust results among the four strategies.",
"To explain why the Att strategy performs poorly, we inspect into the tokens that receive the highest importance scores under Equation 6.",
"We find that the special token [SEP] is dominant in most hidden layers.",
"As shown in Figure 6, from the 4 th 10 th 0.2 0.4 0.6 0.8 1.0 20 25 30 35 40 C o LAM cc 0.2 0.4 0.6 0.8 1.0 88.0 88.5 89.0 89.5 90.0 90.5 91.0 SST2 A cc 0.2 0.4 0.6 0.8 1.0 HSK Width (Normalized) 65 70 75 80 85 QNLIA cc ROSITA 6 Mag Mask ROSITA 6 Rand Mask ROSITA 6 Uniform Mask TinyBERT 4 Mag Mask TinyBERT 4 Rand Mask TinyBERT 4 Uniform Mask 0.2 0.4 0.6 0.8 1.0 HSK Width (Normalized) 77 78 79 80 81 MNLIm A cc Figure 7: Results of width compression with different masking strategies on CoLA, SST-2, QNLI and MNLI.",
"layers, [SEP] is the most attended token for almost all training samples.",
"Meanwhile, [SEP] frequently appears in the top three positions across all the layers.",
"Similar phenomenon was found in Clark et al. (2019), where [SEP] receives high attention scores from itself and other tokens in the middle layers.",
"Combining this phenomenon and the results in Figure 4 and Figure 5, it can be inferred that the representations of [SEP] is not a desirable source of knowledge for ROSITA and TinyBERT.",
"We conjecture that this is because there exists some trivial patterns in the representations of [SEP] , which prevents the student to extract the informative features that are more relevant to the task.",
"As discussed in Section 2.3, the width dimension is compressed by setting some activations in the intermediate representations to zero.",
"Practically, we apply a binary mask M R d TH to the vectors in HT l , which gives rise to (cid:104) M (cid:12) h Tl, 1 , , M (cid:12) h Tl, | x | (cid:105) , where (cid:12) denotes the element-wise product.",
"On this basis, we introduce and compare three masking designs for width compression: Rand Mask randomly set the values in M to zero, where the total number of 0 is controlled by the compression ratio.",
"This mask is static, i.e., h Tl,i ( i, l ) for all the training samples share the same mask.",
"Mag Mask masks out the activations with low magnitude.",
"Therefore, this mask is dynamic, i.e., every h Tl,i ( i, l ) has its own M .",
"The width compression results can be seen in Figure 7, from which we can obtain two findings.",
"First, the masks reveal different patterns when combined with different student models.",
"For ROSITA 6 , the performance of Rand Mask and Uniform Mask decreases sharply at 20% HSK width.",
"In comparison, the performance change is not that significant when it comes to TinyBERT 4 .",
"This suggests that TinyBERT 4 is more robust to HSK width compression than ROSITA 6 .",
"Second, the magnitude-based masking strategy obviously outperforms Rand Mask and Uniform Mask .",
"As we compress the nonzero activations in HSK from 100% to 20% , the performance drop of Mag Mask is only marginal, indicating that there exists considerable knowledge redundancy in the width dimension.",
"With the findings in single-dimension compression, we are now at a position to investigate joint HSK compression from the three dimensions.",
"For every single dimension, measuring the amount of HSK is straightforward: using the number of layers, tokens and activations for depth, length and width respectively.",
"In order to quantify the total amount of HSK (denoted as AHSK ), we define one unit of AHSK as the amount of HSK in any h Tl,i ( l [0 , L ] , i [1 , | x | ]) .",
"In other words, the AHSK of (cid:98) HT equals to ( L (cid:48) + 1) | x | .",
"When HSK is compressed to ND layers, NL tokens and NW activations, the AHSK is ND NL NW d TH .",
"Formally, the triplet ( ND , NL , NW ) defines a search space R ( L (cid:48) +1) | x | d TH of the configu-20",
"rations for three-dimension (3D) HSK compression, and we could have multiple combinations of ( ND , NL , NW ) that satisfy a particular AHSK .",
"In practice, we reconstruct the search space as: ND [1 , L (cid:48) + 1] , NL [1 , 50] , NW d TH { 0 .",
"To study the student's performance with different amounts of HSK, we sample a set of configurations for a range of AHSK , the statistics of which is summarized in Table 1.",
"Details of the configurations can be seen in Appendix C. To compress each single dimension in joint HSK compression, we utilize the most advantageous strategies that we found in Section",
"4. Specifically, Att ( L top = 12 ) w/o [SEP] is used to compress length, Mag Mask is used to compress width and the g ( l, L top ) for depth compression is selected according to the performance of depth compression.",
"The results of 3D joint HSK compression are presented in Figure 8 and Figure",
"9. As we can see, introducing HSK in KD brings consistent improvement to the conventional prediction distillation method.",
"However, the marginal benefit quickly diminishes as more HSK is included.",
"Typically, with less than 1% of HSK, the student models can achieve the same or better result as full-scale HSK distillation.",
"Over a certain threshold of AHSK , the Method CoLA SST-2 QNLI MNLI-m/mm MRPC RTE STS-B Avg Dev BERTBASE (T) 60.1 93.5 91.5 84.7/84.7 86.0 67.5 88.5 82.1 TinyBERT 4 29.8 89.7 87.2 81.0/81.4 82.4 64.7 85.1 75.2 w/ HSK compression 37.5 90.6 88.1 81.5/81.7 83.3 66.3 86.1 76.9 ROSITA 6 30.6 90.1 87.6 81.2/81.5 80.7 64.9 83.4 75.0 w/ HSK compression 43.0 91.6 88.2 81.8/82.0 80.9 68.0 87.2 77.8 Test BERTBASE (G) 52.1 93.5 90.5 84.6/83.4 88.9 66.4 85.8 80.7 TinyBERT 4 28.2 90.9 86.4 81.0/80.3 85.6 61.5 76.8 73.8 w/ HSK compression 30.6 90.6 87.3 81.5/80.8 85.4 61.7 79.0 74.6 ROSITA 6 28.1 90.5 87.0 81.5/80.4 83.0 61.7 73.9 73.3 w/ HSK compression 35.3 91.3 86.7 81.9/80.9 84.5 61.7 79.9 75.3 Table 2: Dev and test set performance of BERTBASE and KD-based BERT compression methods.",
"performance begins to decrease.",
"Among different tasks and student models, the gap between the best results (peaks on the blue lines) and full-scale HSK distillation varies from 0.3 (ROSITA 6 on MNLI and STS-B) to 5.3 (TinyBERT 4 on CoLA).",
"The results also suggest that existing BERT distillation method (i.e., g ( l, 12) ) can be improved by simply compressing HSK: Numerous points of different configurations lie over the red stars.",
"Table 2 presents the results of different KD-based BERT compression methods.",
"For fair comparison, we do not include other methods described in Section 7, because they either distill different type of knowledge or use different student model structure.",
"Here, we focus on comparing the performance with or without HSK compression given the same student model.",
"We can see that except for the results of a few tasks on the test sets, HSK compression consistently promotes the performance of the baseline methods.",
"Existing BERT compression methods mostly focus on improving the inference efficiency.",
"However, the teacher model is used to extract features throughout the training process, which suggests that the training efficiency still has room for improvement.",
"As shown in Figure 10, the compressed models achieve considerable inference speedup, while the increase in training speed is relatively small.",
"Moreover, for students with different sizes or architectures, the teacher should be deployed every time when training a new student.",
"Intuitively, we can run the teacher once and reuse the features for all the students.",
"In this way, we do not need to load the teacher model while training the student, and thereby increasing the training speed.",
"We refer 2.2x faster 2.0x 3.1x 2.8x 9.2x faster 6.9x 7.7x faster 4.0x 2.8x faster 2.5x 3.4x 2.7x Figure 10: Training time (left) and inference time (right) with different devices and models on MNLI.",
"To evaluate the training efficiency of the proposed KD paradigm, we compute the training time on the MNLI dataset.",
"The results are presented in the left plots of Figure",
"10. As we can see, offline HSK distillation increases the training speed of the student models, as compared with online distillation.",
"The speedup is consistent for different student models and devices.",
"Despite the training speedup, however, loading and storing HSK increases the memory consumption.",
"The full set of HSK can take up a large amount of space, especially for the pre-trained language models like BERT.",
"Fortunately, our findings in the previous sections suggest that the student only requires a tiny fraction of HSK.",
"consump-2 In the literature (Gou et al., 2020), offline distillation also means the teacher parameters are fixed during KD, which is different from our definition here.",
"tion of four configurations with different AHSK .",
"As we can see, the full set of HSK for ROSITA 6 takes up approximately 1 TB of memory space, which is only applicable to some high-end cloud servers.",
"Compressing the HSK can reduce the size to GB level, which enables training on devices like personal computers.",
"It is worth noticing that storing the dynamic Mag Mask is consuming, which typically accounts for more space than HSK.",
"However, the binary masks can be further compressed using some data compression algorithms.",
"Based on the above results and analysis, we summarize our paradigm for efficient HSK distillation as: First, the teacher BERT runs on the training data to obtain and store the features of HSK and predictions.",
"This can be done on devices that have sufficient computing and memory resources.",
"Then, according to the target application and device, we decide the student's structure and the amount of HSK to distill.",
"Finally, KD can be performed on a cloud server or directly on the target device.",
"KD is widely studied in BERT compression.",
"In addition to distilling the teacher's predictions as in Hinton et al. (2015), researches have shown that the student's performance can be improved by using the representations from intermediate BERT layers (Sun et al., 2019; Liu et al., 2021; Hou et al., 2020) and the self-attention distributions (Jiao et al., 2020; Sun et al., 2020).",
"Typically, the knowledge is extensively distilled in a layer-wise manner.",
"To fully utilize BERT's knowledge, some recent work also proposed to combine multiple teacher layers in BERT KD (Passban et al., 2021; Li et al., 2020) or KD on Transformer-based NMT models (Wu et al., 2020).",
"In contrast to these studies that attempt to increase the amount knowledge, we study BERT KD from the compression point of view.",
"Similar idea can be found in MiniLMs (Wang et al., 2020a,b), which only use the teacher's knowledge to guide the last layer of student.",
"However, they only consider knowledge from the layer dimension, while we investigate the three dimensions of HSK.",
"We explore a variety of strategies to determine feature importance for each single dimension.",
"This is related to a line of studies called the attribution methods, which attempt to attribute a neural net-work's prediction to the input features.",
"The attention weights have also been investigated as an attribution method.",
"However, prior work (Wiegreffe and Pinter, 2019; Serrano and Smith, 2019; Brunner et al., 2020; Hao et al., 2020) finds that attention weights usually fail to correlate well with their contributions to the final prediction.",
"This echoes with our finding that the original Att strategy performs poorly in length compression.",
"However, the attention weights may play different roles in attribution and HSK distillation.",
"Whether the findings in attribution are transferable to HSK distillation is still a problem that needs further investigation.",
"In this paper, we investigate the compression of HSK in BERT KD.",
"We divide the HSK of BERT into three dimensions and explore a range of compression strategies for each single dimension.",
"On this basis, we jointly compress the three dimensions and find that, with a tiny fraction of HSK, the student can achieve the same or even better performance as distilling the full-scale knowledge.",
"Based on this finding, we propose a new paradigm to improve the training efficiency in BERT KD, which does not require loading the teacher model during training.",
"The experiments show that the training speed can be increased by 2 .",
"7 3 .",
"4 for two kinds of student models and two types of CPU and GPU devices.",
"Most of the compression strategies investigated in this study are heuristic, which still have room for improvement.",
"Therefore, a future direction of our work could be designing more advanced algorithm to search for the most useful HSK in BERT KD.",
"Additionally, since HSK distillation in the pre-training stage is orders of magnitude time-consuming than task-specific distillation, the marginal utility diminishing effect in pre-training distillation is also a problem worth studying.",
"This work was supported by National Natural Science Foundation of China (No. 61976207, No. 61906187)."
] | [
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"result",
"objective",
"method",
"abstain",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"objective",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"other"
] |
[
"Cross-domain sentiment classification aims to address the lack of massive amounts of labeled data.",
"It demands to predict sentiment polarity on a target domain utilizing a classifier learned from a source domain.",
"In this paper, we investigate how to efficiently apply the pre-training language model BERT on the unsupervised domain adaptation.",
"Due to the pre-training task and corpus, BERT is task-agnostic, which lacks domain awareness and can not distinguish the characteristic of source and target domain when transferring knowledge.",
"To tackle these problems, we design a post-training procedure, which contains the target domain masked language model task and a novel domain-distinguish pre-training task.",
"The post-training procedure will encourage BERT to be domain-aware and distill the domain-specific features in a self-supervised way.",
"Based on this, we could then conduct the adversarial training to derive the enhanced domain-invariant features.",
"Extensive experiments on Amazon dataset show that our model outperforms state-of-the-art methods by a large margin.",
"The ablation study demonstrates that the remarkable improvement is not only from BERT but also from our method.",
"Sentiment analysis aims to automatically identify the sentiment polarity of the textual data.",
"It is an essential task in natural language processing with widespread applications, such as movie reviews and product recommendations.",
"Recently, deep networks have significantly improved the state-of-the-art in sentiment analysis.",
"However, training deep networks is highly depended on a large amount of labeled training data which is time-consuming and requires expensive manual labeling.",
"Thus, there is a strong need to leverage or reuse rich labeled Corresponding author.",
"data from a different but related source domain.",
"Cross-domain sentiment analysis, which transfers the knowledge learned from source domain to a new target domain, becomes a promising direction.",
"The main challenge of cross-domain sentiment analysis is domain discrepancy due to different expression of the user's emotion across domains.",
"To address the problem, a wide-used approach is designed to extract domain invariant features, which means that the distributions of features from the source domain and target domain are similar (Zellinger et al., 2017; Persello and Bruzzone, 2016; Ganin et al., 2016; Yu and Jiang, 2016a).",
"One effective way to obtain domain-invariant features is adversarial training(Ganin et al., 2016; Li et al., 2017; Zheng et al., 2019).",
"Specifically, A domain discriminator is learned by minimizing the classification error of distinguishing the source from the target domains, while a deep classification model learns transferable representations that are indistinguishable by the domain discriminator.",
"Very recently, pre-trained language models have shown to be effective for improving many language tasks (Peters et al., 2018).",
"Bidirectional Encoder Representations from Transformers (BERT) realized a breakthrough, which pre-trains its encoder using language modeling and by discriminating surrounding sentences in a document from random ones (Devlin et al., 2019).",
"Pre-training in this manner can construct bidirectional contextual representation, and the large-scale pre-training enables BERT powerful ability in language understanding.",
"We only need to add one output layer and fine-tune BERT to get the state-of-the-art results in sentiment analysis.",
"Theoretically, BERT can enhance the performance in cross-domain sentiment analysis.",
"However, some important problems remain when directly fine-tuning BERT in the task of cross-domain sentiment analysis: Firstly, there is no labeled data in the target domain which brings many difficulties to the fine-tune procedure.",
"If we fine-tune BERT only by the source labeled data, the shift between training and test distributions will degrade the BERT's performance.",
"Secondly, BERT is task-agnostic and has almost no understanding of opinion text.",
"BERT is pre-trained by the universal language Wikipedia, leaving the domain challenges unresolved (Xu et al., 2019).",
"For example, in the pre-training procedure, BERT may learn to guess the [MASK] in The [MASK] is bright as sun.",
"But in a laptop sentiment analysis, it is more likely to be screen.",
"Especially, in the cross-domain sentiment analysis scenario, the labeled data is limited, which is in-sufficient to fine-tune BERT to ensure full domain-awareness.",
"Thirdly, cross-domain sentiment analysis also arises a new challenge for BERT to distinguish the characteristic of source and target domain to keep the transferable features and abandon domain-specific information.",
"To address above problems, we design a novel pre-training task to make BERT domain-aware and then improve the BERT's fine-tuning procedure by adversarial training.",
"Specifically, a novel post-training procedure is implemented that adapts BERT with unlabeled data from different domains to enhance the domain-awareness.",
"Apart from the target domain masked language model task, we introduce the domain-distinguish pre-training task.",
"BERT will be pre-trained to distinguish whether the two sentences come from the same domain.",
"The domain-distinguish pre-training task will encourage BERT to distill syntactic and semantic domain-specific features, so as to be domain-aware.",
"The proposed post-training procedure gives us a new way to fully utilize language knowledge from the target domain and link the source and target in a self-supervised way.",
"Based on this, we could then conduct the adversarial training to derive the enhanced domain-invariant features.",
"Experiments on Amazon reviews benchmark dataset show that our model gets the average result 90.12% in accuracy, 4.22% absolute improvement compared with state-of-the-art methods.",
"The ablation study shows that the remarkable improvement is not only from BERT but also from our method.",
"The contributions of this paper can be summarized as follows: We apply BERT to cross-domain sentiment analysis task and leverage the post-training method to inject the target domain knowledge to BERT.",
"A novel domain-distinguish pre-training task is proposed for the BERT post-training.",
"This design encourages BERT to be domain-aware and distill the domain-specific features in a self-supervised way.",
"Cross-domain sentiment analysis aims to generalize a classifier that is trained on a source domain, for which typically plenty of labeled data is available, to a target domain, for which labeled data is scarce.",
"There are many pivot-based methods (Blitzer et al., 2007a; Yu and Jiang, 2016b; Ziser and Reichart, 2018; Peng et al., 2018), which focus on inducing a low-dimensional feature representation shared across domains based on the co-occurrence between pivots and non-pivots.",
"However, selecting pivot words is very tedious, and the pivot words are manually selected, which may not be accurate.",
"Recently, some adversarial learning methods (Ganin et al., 2016; Li et al., 2017; Zheng et al., 2019) propose to train the feature generator to minimize the classification loss and simultaneously deceive the discriminator, which is end-to-end without manually selecting pivots.",
"Pre-trained language representations with self-supervised objectives have become standard in a wide range of NLP tasks.",
"Previous work can be divided into two main categories: feature-based approaches and fine-tuning approaches.",
"The recent proposed feature-based approaches mainly focus on learning contextualized word representations such as CoVe (McCann et al., 2017) and ELMo (Peters et al., 2018).",
"As with traditional word embeddings, these learned representations are also typically used as features in a downstream model.",
"On the other hand, fine-tuning approaches mainly pre-train a language model on a large corpus with an unsupervised objective and then fine-tune the model with in-domain labeled data for downstream applications.",
"The advantage of these approaches is that few parameters need to be learned from scratch.",
"Specifically, Howard and Ruder (2018) propose ULMFiT, which uses a different learning rate for each layer with learning warmup and gradual unfreezing.",
"GPT (Radford et al., 2018) uses a transformer encoder (Vaswani et al., 2017) instead of an LSTM and jointly fine-tunes with the language modeling objective.",
"Moreover, BERT (Devlin et al., 2019) is a large-scale language model consisting of multiple layers of transformer, which further incorporates bidirectional representations.",
"BERT is the state-of-art pre-training language model.",
"However, in the cross-domain sentiment analysis scenario, BERT is task-agnostic and can not distinguish the characteristic of source and target domain.",
"In this section, we introduce the proposed model for cross-domain sentiment analysis in detail.",
"We begin by giving the problem definition and notations.",
"Then BERT and post-training method are formally presented in the second subsection.",
"Finally, the adversarial training process is introduced.",
"We also give a theoretical analysis of our model.",
"In the task of cross-domain sentiment analysis, we are given two domains D s and D t which denote a source domain and a target domain, respectively.",
"In the source domain, D ls = { x is , y is } N ls i =1 are N ls labeled source domain examples, where x is means a sentence and y is is the corresponding polarity label.",
"There are also N us unlabeled data D us = { x is } N ls + N us i =1+ N ls in the source domain.",
"In the target domain, there is a set of unlabeled data D t = { x it } N t i =1 , where N t is the number of unlabeled data.",
"Cross-domain sentiment analysis demands us to learn a robust classifier trained on labeled source domain data to predict the polarity of unlabeled sentences from the target domain.",
"BERTBERT (Devlin et al., 2019) builds on the Transformer networks (Vaswani et al., 2017), which relies purely on attention mechanism and allows modeling of dependencies without regard to their distance in the input sequences.",
"BERT is pre-trained by predicting randomly masked words in the input (MLM task) and classifying whether the sentences are continuous or not (NSP task).",
"The MLM task allows the word representation to fuse the left and the right context, and the NSP task enables BERT to infer the sentences' relationship.",
"The pre-trained BERT can be easily fine-tuned by one softmax output layer for classification task.",
"Despite the success, BERT suffers from the domain challenge.",
"BERT is pre-trained by Wikipedia, leading to task-agnostic and little understanding of opinion text.",
"Especially in the cross-domain sentiment analysis scenario, the lack of abundant labeled data limits the fine-tune procedure, which degrades BERT due to the domain shift.",
"This task also demands BERT to distinguish the characteristic of source and target domain for better knowledge transfer.",
"Therefore, we propose BERT post-training which takes BERT's pre-trained weights as the initialization for basic language understanding and adapts BERT by novel self-supervised pre-trained tasks: domain-distinguish task and target domain masked language model.",
"The next sentence prediction (NSP) task encourages BERT to model the relationship between sentences beyond word-level, which benefits the task of Question Answering and Natural Language Inference.",
"However, cross-domain sentiment analysis operates on a single text sentence and does not require the inference ability.",
"Instead, the ability to distinguish domains plays an important role.",
"Therefore, during the post-training procedure, we replace the NSP task by domain-distinguish task (DDT).",
"Specifically, we construct the sentence-pair input: [CLS] sentence A [SEP] sentence B [SEP] , where [CLS] and [SEP] are special embeddings for classification and separating sentences.",
"50% of time sentence A and sentence B are all randomly sampled from target domain reviews, we label it TargetDomain .",
"And 50% of time sentence A and sentence B come from target domain and another domain, whose label is MixDomain .",
"We do not fix the collocation, in another word, we only ensure that the two sentences come from different domains but the order is random.",
"For example: Input = [CLS] The mouse is smooth and great [SEP] The screen is plain [SEP] Label = TargetDomain Input = [CLS] This book is boring [SEP] The system of the laptop is stable [SEP] Label = MixDomain The domain-distinguish pre-training is a classification task.",
"We add one output layer on the pooled representation and maximize the likelihood of the right label.",
"The domain-distinguish pre-training enables BERT to distill the specific features for different domains, which enhances the downstream adversarial training and benefits the cross-domain sentiment analysis.",
"To inject the target domain knowledge, we leverage the masked language model (MLM) (Devlin et al., 2019).",
"It requires to predict the randomly masked words in the sentence, which encourages BERT to construct a deep bidirectional representation.",
"In the cross-domain sentiment analysis, there are no labeled data but plenty of unlabeled data in the target domain to post-train BERT by MLM.",
"Specifically, we replace 15% of tokens by [MASK] at random.",
"The final hidden vectors corresponding to the mask tokens are fed into an output softmax over the vocabulary.",
"We maximize the likelihood of the masked token id.",
"Post-training by unlabeled review data in target domain will effectively alleviate the shift of domain knowledge.",
"For example, if the masked word is an opinion word in This movie is [MASK] , this objective challenges BERT to learn the representation for fine-grained opinion words in movie review domain, such as touchable or disturbing.",
"One problem is that the DDT task mixes sentences from other domains in the sentence pair.",
"The sentence from other domains will be the noise which brings domain bias.",
"Therefore, we only mask the tokens in target domain sentences if the domain-distinguish task label is MixDomain .",
"The total loss of the post-training procedure is the sum of losses of target domain MLM and domain-distinguish task.",
"The adaptation takes about 5 hours to complete on one single NVIDIA P100 GPU.",
"The post-training procedure injects target domain knowledge and brings domain-awareness to BERT.",
"Based on the post-trained BERT, we now could utilize the adversarial training to abandon the distilled domain-specific features to derive the domain-invariant features.",
"Specifically, a sentiment classifier and a domain discriminator is designed operating on the hidden state h [ CLS ] of the special classification embedding [CLS] .",
"The sentiment classifier is simply a fully-connected layer and outputs the probabilities through a softmax layer:",
"The classifier is trained by the labeled data in the source domain and the loss function is cross-entropy:",
"The domain discriminator aims to predict domain labels of samples, i.e., coming from the source or target domain.",
"The parameters of BERT are optimized to maximize the loss of the domain discriminator.",
"This target will encourage BERT to fool the domain discriminator to generate domain-invariant features.",
"Specifically, before feeding h [ CLS ] to the domain discriminator, the hidden state of classification embedding [CLS] h [ CLS ] goes through the gradient reversal layer (GRL) (Ganin et al., 2016).",
"During the forward propagation, the GRL acts as an identity function but during the backpropagation, the GRL reverses the gradient by multiplying it by a negative scalar .",
"GRL can be formulated as a pseudo-function' Q ( x ) by two equations below in order to describe its forwardand backward-behaviors: Q ( x ) = x, (3) Q ( x ) x = I.",
"The target is to minimize the cross-entropy for all data from the source and target domains:",
"where d i { 0 , 1 } is the ground truth domain label.",
"Due to the GRL, the parameters for domain discriminator dd are optimized to increase the ability to predict domain labels, however, the parameters for BERT BERT are optimized to fool the domain discriminator, leading to domain-invariant features.",
"The sentiment classifier and the domain discriminator are jointly trained, and the total loss is:",
"The post-training procedure and our proposed domain-distinguish pre-training task will enhance the adversarial training to obtain lower classification error in the target domain, we will analyze it in Sec 3.5.",
"In this section, we provide a theoretical analysis of our approach.",
"First, we provide an insight into existing theory, then we introduce an expansion of the theory related to our method and explain how the post-training and adversarial training cooperate to obtain a remarkably better result than state-of-the-art methods.",
"For each domain, there is a labeling function on inputs X , defined as f : X [0 , 1] .",
"The ideal label functions for source and target domain are denoted as: f s and f t , respectively.",
"We define a hypothesis label function h : X [0 , 1] and a disagreement function: (cid:15) ( h 1 , h 2 ) = E [ | h 1 ( x ) h 2 ( x ) | ] .",
"Then the expected error on the source samples of h is defined as: (cid:15) s ( h ) = (cid:15) s ( h, f s ) .",
"For the target domain, we have: (cid:15) t ( h ) = (cid:15) t ( h, f t ) .",
"The divergence between source and target domain could thus be measured by H H -distance, which is defined as follows: d H H ( D s , D t ) = 2 sup h,h (cid:48) H | (cid:15) s ( h, h (cid:48) ) (cid:15) t ( h, h (cid:48) ) | (9) This distance is firstly proposed in (Ben-David et al., 2010) and frequently used to measure the adaptability between different domains (Shen et al., 2018; Chen et al., 2019).",
"Let H be the hypothesis class.",
"Given two different domains D s , D t , we have: h H, (cid:15) t ( h ) (cid:15) s ( h ) + 1 2 d H H ( D s , D t ) + C (10) This theorem means that the expected error on the target domain is upper bounded by three terms: (1) the expected error on the source domain; (2) the divergence between the distributions D s and D t ; (3) the error of the ideal joint hypothesis.",
"Normally, C is disregarded because it is considered to be negligibly small.",
"Therefore, the first and second terms are important quantitatively in computing the target error.",
"For the first term, the error rate of source domain (cid:15) s , it is easy to minimize with source labeled training data.",
"Moreover, we adopt BERT, which brings powerful contextual representation for lower error rate.",
"The second item in Eq.",
"10 demands us to generate similar features among different domains.",
"Our proposed domain-distinguish pre-training task and post-training for BERT enable the model to identify the specific features for different domains.",
"This ability will enhance the domain discriminator, which will help to find more complicated domain specific features and get abandoned by adversarial training.",
"Therefore, we further decrease the divergence between the domains, which is quantitatively measured by A -distance in Sec 4.6.",
"In this section, we empirically evaluate the performance of our proposed methods.",
"We conduct the experiments on the widely-used Amazon reviews benchmark datasets collected by (Blitzer et al., 2007b).",
"It contains reviews from four different domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K).",
"For each domain, there are 2,000 labeled reviews and approximately 4000 unlabeled reviews.",
"Following the convention of previous works (Ziser and Re-ichart, 2018; Ganin et al., 2016; Qu et al., 2019), we construct 12 cross-domain sentiment analysis tasks.",
"For each task, we employ a 5-fold cross-validation protocol, that is, in each fold, 1600 balanced samples are randomly selected from the labeled data for training and the rest 400 for validation.",
"We adopt BERT base (uncased) as the basis for all experiments.",
"When generating the post-training data, each sentence in the target domain gets duplicated 10 times with different masks and sentences pair.",
"We limit the maximum sequence length is 256.",
"During the post-training, we train with batch size of 16 for 10000 steps.",
"The optimizer is Adam with learning rate 2e-5, 1 = 0 .",
"9 , 2 = 0 .",
"999 , L2 weight decay of 0.01.",
"During the adversarial training, The weights in sentiment classifier and domain discriminator are initialized from a truncated normal distribution with mean 0.0 and stddev 0.02.",
"In the gradient reversal layer (GRL), we define the training progress as p = tT , where t and T are current training step and the maximum training step, respectively, and the adaptation rate is increased as = 2 1+exp( 10 p ) 1 .",
"We compare our method with 5 state-of-the-art methods: DANN (Ganin et al., 2016), PBLM (Ziser and Reichart, 2018), HATN (Li et al., 2018), ACAN (Qu et al., 2019), IATN (Zhang et al., 2019).",
"We also design several variants of BERT as baselines: BERT : Fine-tuning vanilla BERT by the source domain labeled data.",
"HATN-BERT : HATN (Li et al., 2018) model based on BERT.",
"BERT-AT : This method conducts the adversarial training operating on vanilla BERT.",
"BERT-DA : Fine-tuning domain-aware BERT by the source domain labeled data.",
"The domain-aware BERT is obtained by post-training.",
"BERT-DAAT : Our proposed method introduced in Sec 3.",
"Table 2 shows the classification accuracy of different methods.",
"We can observe that the proposed BERT-DAAT outperforms all other methods.",
"For the previous models, they mostly base on the word2vec (Mikolov et al., 2013) or glove (Pen-nington et al., 2014).",
"Compared to BERT's contextual word representation, they can not model complex characteristics of word use and how these uses vary across linguistic contexts, resulting in relatively worse overall performance.",
"We can see that the vanilla BERT, which is fine-tuned only by the source domain labeled data without utilizing target domain data, can still outperform all the previous methods.",
"For fair comparison, we reproduce the experiment of HATN model (Li et al., 2018) that incorporates BERT as the base model.",
"As shown in Table 2, HATN-BERT achieves a comparable result with BERT-AT.",
"For the BERT variants, we did not see a remarkable improvement in the results of BERT-AT, which conducts adversarial training on BERT.",
"It demonstrates that, in the task of cross-domain sentiment analysis, the bottleneck of BERT is the lack of domain-awareness and can not be tackled purely -80 -60 -40 -20 0 20 40 60 -50 0 50 BERT Source negative Source positive Target negative Target positive -80 -60 -40 -20 0 20 40 60 -80 -60 -40 -20 0 20 40 60 80 Source negative Source positive Target negative Target positive BERT-DA -100 -50 0 50 -50 0 50 Source negative Source positive Target negative Target positive BERT-AT -80 -60 -40 -20 0 20 40 60 80 -60 -40 -20 0 20 40 60 80 Source negative Source positive Target negative Target positive BERT-DAAT Figure 1: The effect of post-training and adversarial training on the distribution of the extracted features.",
"by adversarial training.",
"On the contrary, the post-training procedure could improve the result by 1.12% on average.",
"It verifies the effectiveness of our proposed post-training methods that could inject the domain knowledge to BERT.",
"As expected, BERT-DAAT performs best among the variants of BERT, 0.75% absolute improvement to BERT-DA and 1.87% absolute improvement to BERT, showing that the post-training procedure could further enhance the adversarial training.",
"To intuitively assess the effects of the post-training and adversarial training on BERT, we further perform a visualization of the feature representations of the variants of BERT for the training data in the source domain and the testing data in the target domain for the B E task.",
"As shown in Figure 1, the graphs are obtained by applying t-SNE on the set of all representation of source and target data points.",
"Every sample is mapped into a 768-dimensional feature space through BERT and projected back into a two-dimensional plane by the t-SNE.",
"In the vanilla BERT representation (first subgraph in Figure 1), we could observe that data points of different polarities in source domain are well separated.",
"While for the target domain, some data points are mixed together.",
"It shows that only utilizing source domain labeled data is not enough for the target domain classification.",
"For the post-trained BERT (subgraph for BERT-DA), data points belong to four clusters, indicating that domains and sentiment polarities are both well classi-fied.",
"It verifies that our post-training strategy brings domain-awareness to BERT.",
"Moreover, compared to the first subgraph, the boundary for sentiment polarity classification is more clear, showing that injecting domain knowledge by post-training is beneficial to sentiment classification.",
"The latter two subgraphs in Figure 1 are the feature distributions obtained by adversarial training.",
"One common characteristic is that data samples from different domains are very close to each other through adversarial training.",
"However, the boundary for sentiment polarity classification is not very clear in BERT-AT's feature representation, resulting in degraded performance.",
"For our proposed BERT-DAAT, the post-training enables the domain-awareness and help to distill more complicated domain specific features.",
"The adversarial training is thus enhanced to get more domain-invariant features.",
"We can find that target points are homogeneously spread out among source points, which decreases the divergence between the domains.",
"According to Theorem 10, it can lower the upper boundary of the target error.",
"Theorem 10 shows that the divergence between domains d H H ( D s , D t ) plays an important role.",
"To quantitatively measure it, we compare the A distance, which is usually used to measure domain discrepancy (Ben-David et al., 2010).",
"The definition of A -distance is: d A = 2(1 2 (cid:15) ) , where (cid:15) is the generalization error of a classifier trained with the binary classification task of discriminating the source domain and target domain.",
"More precisely, to obtain A -distance, we firstly split source and target domain data into two subsets of equal size and get the feature representation.",
"We then train a linear SVM on the first subset to predict which domain the sample comes from.",
"The error rate (cid:15) could be calculated on the second subset through the trained SVM, and A -distance is obtained by d A = 2(1 2 (cid:15) ) .",
"We compare the A -distance of BERT, BERT-AT, and BERT-DAAT.",
"Results are shown in Figure 2.",
"For each cross-domain sentiment analysis task, BERT BERT-AT BERT-DAAT Figure 2: Comparison of A -distance of different models.",
"the A -distance of BERT is highest.",
"It is easy to conclude that applying adversarial training can effectively decrease the A -distance.",
"Overall, the A distance of BERT-DAAT is lower than BERT-AT, verifying that the post-training could enhance the adversarial training to decrease the domain discrepancy.",
"To analyze the effect of different components including post-training steps and post-training tasks, we conduct the ablation experiments.",
"In this subsection, we study the effect of post-training steps.",
"Figure 3 presents the accuracy on the task of E K based on the checkpoint that has been post-trained for k steps.",
"The results for BERT-DA are obtained by fine-tuning source domain labeled data, BERT-DAAT is adversarial training by source labeled data and target unlabeled data.",
"We find that, with limited post-training steps (fewer than 5000 steps), BERT-DA and BERT-DAAT perform similarly with BERT and BERT-AT, respectively.",
"However, given post-training steps more than 5000, both the results of BERT-DA and BERT-DAAT see an increase.",
"Especially, after post-training more than 5000 steps, BERT-DAAT shows remarkable strengths compared to BERT-DA.",
"This shows that plenty of post-training steps is necessary to inject domain knowledge and domain-awareness.",
"The post-training tasks in our work include target domain masked language model (MLM) and our proposed domain-distinguish task (DDT).",
"We design two models which ablate MLM and DDTA cc u r ac y ( % ) Post-training steps BERT-DA BERT-DAAT Figure 3: Ablation study on the number of post-training steps.",
"separately and compare them with BERT-DAAT on the tasks of D B, E B, and K B. Results in Table 2 indicate that: the target domain masked language model task (MLM) and domain-distinguish task(DDT) are both beneficial to cross-domain sentiment analysis.",
"In this paper, we propose the BERT-DAAT model for cross-domain sentiment analysis.",
"Our purpose is to inject the target domain knowledge to BERT and encourage BERT to be domain-aware.",
"Specifically, we conduct post-training and adversarial training.",
"A novel domain-distinguish pre-training task is designed to distill the domain-specific features in a self-supervised.",
"Experimental results on Amazon dataset demonstrate the effectiveness of our model, which remarkably outperforms state-of-the-art methods.",
"The proposed post-training procedure could also be applied to other domain adaptation scenarios such as named entity recognition, question answering, and reading comprehension.",
"In the future, we would like to investigate the application of our theory in these domain adaptation tasks.",
"This work was supported in part by the National Key R&D Program of China 2018YFB1800502, in part by the National Natural Science Foundation of China under Grants 61671079 and 61771068, in part by the Beijing Municipal Natural Science Foundation under Grant 4182041, and in part by the Ministry of Education and China Mobile Joint Fund MCM20180101.",
"This work was also supported by BUPT Excellent Ph.D.",
"Students Foundation CX2020206."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"result",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"objective",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"We present a simple approach to improve direct speech-to-text translation (ST) when the source language is low-resource: we pre-train the model on a high-resource automatic speech recognition (ASR) task, and then fine-tune its parameters for ST. We demonstrate that our approach is effective by pre-training on 300 hours of English ASR data to improve Spanish-English ST from 10.8 to 20.2 BLEU when only 20 hours of Spanish-English ST training data are available.",
"Through an ablation study, we find that the pre-trained encoder (acoustic model) accounts for most of the improvement, despite the fact that the shared language in these tasks is the target language text, not the source language audio.",
"Applying this insight, we show that pre-training on ASR helps ST even when the ASR language differs from both source and target ST languages: pre-training on French ASR also improves Spanish-English ST. Finally, we show that the approach improves performance on a true low-resource task: pre-training on a combination of English ASR and French ASR improves Mboshi-French ST, where only 4 hours of data are available, from 3.5 to 7.1 BLEU.",
"Speech-to-text Translation (ST) has many potential applications for low-resource languages: for example in language documentation, where the source language is often unwritten or endangered (Be-sacier et al., 2006; Martin et al., 2015; Adams et al., 2016a,b; Anastasopoulos and Chiang, 2017); or in crisis relief, where emergency workers might need to respond to calls or requests in a foreign language (Munro, 2010).",
"Traditional ST is a pipeline of automatic speech recognition (ASR) and machine translation (MT), and thus requires transcribed source audio to train ASR and parallel text to train MT. These resources are often unavailable for low-resource languages, but for our potential applications, there may be some source language audio paired with target language text translations.",
"In these scenarios, end-to-end ST is appealing.",
"Recently, Weiss et al. (2017) showed that end-to-end ST can be very effective, achieving an impressive BLEU score of 47.3 on Spanish-English ST. But this result required over 150 hours of translated audio for training, still a substantial resource requirement.",
"By comparison, a similar system trained on only 20 hours of data for the same task achieved a BLEU score of 5.3 (Bansal et al., 2018).",
"Other low-resource systems have similarly low accuracies (Anastasopoulos and Chiang, 2018; Berard et al., 2018).",
"To improve end-to-end ST in low-resource settings, we can try to leverage other data resources.",
"For example, if we have transcribed audio in the source language, we can use multi-task learning to improve ST (Anastasopoulos and Chiang, 2018; Weiss et al., 2017; Berard et al., 2018).",
"But source language transcriptions are unlikely to be available in our scenarios of interest.",
"Could we improve low-resource ST by leveraging data from a high-resource language?",
"For ASR, training a single model on multiple languages can be effective for all of them (Toshniwal et al., 2018b; Deng et al., 2013).",
"For MT, transfer learning (Thrun, 1995) has been very effective: pretraining a model for a high-resource language pair and transferring its parameters to a low-resource language pair when the target language is shared (Zoph et al., 2016; Johnson et al., 2017).",
"Inspired by these successes, we show that low-resource ST can leverage transcribed audio in a high-resource target language, or even a different language altogether, simply by pre-training a model for the high-resource ASR task, and then transferring and fine-tuning some or all of the model's parameters for low-resource ST. We first test our approach using Spanish as the source language and English as the target.",
"After training an ASR system on 300 hours of English, fine-tuning on 20 hours of Spanish-English yields a BLEU score of 20.2, compared to only 10.8 for an ST model without ASR pre-training.",
"Analyzing this result, we discover that the main benefit of pre-training arises from the transfer of the encoder parameters, which model the input acoustic signal.",
"In fact, this effect is so strong that we also obtain improvements by pre-training on a language that differs from both the source and the target: pre-training on French and fine-tuning on Spanish-English.",
"We hypothesize that pre-training the encoder parameters, even on a different language, allows the model to better learn about linguistically meaningful phonetic variation while normalizing over acoustic variability such as speaker and channel differences.",
"We conclude that the acoustic-phonetic learning problem, rather than translation itself, is one of the main difficulties in low-resource ST. A final set of experiments confirm that ASR pretraining also helps on another language pair where the input is truly low-resource: Mboshi-French.",
"For both ASR and ST, we use an encoder-decoder model with attention adapted from Weiss et al. (2017), Berard et al. (2018) and Bansal et al. (2018), as shown in Figure 1.",
"We use the same model architecture for all our models, allowing us to conveniently transfer parameters between them.",
"We also constrain the hyper-parameter search to fit a model into a single Titan X GPU, allowing us to maximize available compute resources.",
"We use a pre-trained English ASR model to initialize training of Spanish-English ST models, and a pre-trained French ASR model to initialize training of Mboshi-French ST models.",
"During ST training, all model parameters are updated.",
"In these configurations, the decoder shares the same vocabulary across the ASR and ST tasks.",
"This is practical for settings where the target text language is high-resource with ASR data available.",
"In settings where both ST languages are low-resource, ASR data may only be available in a third language.",
"To test whether transfer learning will help in this setting, we use a pre-trained French ASR model to train Spanish-English ST models; and English ASR for Mboshi-French models.",
"In these cases, the ST languages are different from the Process LSTM CNN speech features (MFCCs) Attention <s> c ly lear LSTM EMBEDDINGS FULLY CONNECTED SOFTMAX input:Spanish speech English reference text or prediction from previous timestep using BPE subword units Decoder Encoder <e> c ly lear output: English text prediction Figure 1: Encoder-decoder with attention model architecture for both ASR and ST. The encoder input is the Spanish speech utterance claro , translated as clearly , represented as BPE (subword) units.",
"ASR language, so we can only transfer the encoder parameters of the ASR model, since the dimensions of the decoder's output softmax layer are indexed by the vocabulary, which is not shared.",
"1 Sharing only the speech encoder parameters is much easier, since the speech input can be preprocessed in the same manner for all languages.",
"This form of transfer learning is more flexible, as there are no constraints on the ASR language used.",
"English ASR.",
"We use the Switchboard Telephone speech corpus (Godfrey and Holliman, 1993), which consists of around 300 hours of English speech and transcripts, split into 260k utterances.",
"The development set consists of 5 hours that we removed from the training set, split into 4k utterances.",
"French ASR.",
"We use the French speech corpus from the GlobalPhone collection (Schultz, 2002), which consists of around 20 hours of high quality read speech and transcripts, split into 9k utterances.",
"The development set consists of 2 hours, split into 800 utterances.",
"Spanish-English ST. We use the Fisher Spanish speech corpus (Graff et al., 2010), which consists of 160 hours of telephone speech in a variety of Spanish dialects, split into 140K utterances.",
"To simulate low-resource conditions, we construct smaller train-1 Using a shared vocabulary of characters or subwords is an interesting direction for future work, but not explored here.",
"ing corpora consisting of 50, 20, 10, 5, or 2.5 hours of data, selected at random from the full training data.",
"The development and test sets each consist of around 4.5 hours of speech, split into 4K utterances.",
"We do not use the corresponding Spanish transcripts; our target text consists of English translations that were collected through crowdsourcing (Post et al., 2013, 2014).",
"Mboshi-French ST. Mboshi is a Bantu language spoken in the Republic of Congo, with around 160,000 speakers.",
"2 We use the Mboshi-French parallel corpus (Godard et al., 2018), which consists of around 4 hours of Mboshi speech, split into a training set of 5K utterances and a development set of 500 utterances.",
"Since this corpus does not include a designated test set, we randomly sampled and removed 200 utterances from training to use as a development set, and use the designated development data as a test set.",
"Speech.",
"We convert raw speech input to 13-dimensional MFCCs using Kaldi (Povey et al., 2011).",
"3 We also perform speaker-level mean and variance normalization.",
"Text.",
"The target text of the Spanish-English data set contains 1.5M word tokens and 17K word types.",
"If we model text as sequences of words, our model cannot produce any of the unseen word types in the test data and is penalized for this, but it can be trained very quickly (Bansal et al., 2018).",
"If we instead model text as sequences of characters as done by Weiss et al. (2017), we would have 7M tokens and 100 types, resulting in a model that is open-vocabulary, but very slow to train (Bansal et al., 2018).",
"As an effective middle ground, we use byte pair encoding (BPE; Sennrich et al., 2016) to segment each word into subwords, each of which is a character or a high-frequency sequence of characterswe use 1000 of these high-frequency sequences.",
"Since the set of subwords includes the full set of characters, the model is still open vocabulary; but it results in a text with only 1.9M tokens and just over 1K types, which can be trained almost as fast as the word-level model.",
"The vocabulary for BPE depends on the fre-2 ethnologue.com/language/mdw 3 In preliminary experiments, we did not find much difference between between MFCCs and more raw spectral representations like Mel filterbank features.",
"quency of character sequences, so it must be computed with respect to a specific corpus.",
"For English, we use the full 160-hour Spanish-English ST target training text.",
"For French, we use the Mboshi-French ST target training text.",
"Speech encoder.",
"As shown schematically in Figure 1, MFCC feature vectors, extracted using a window size of 25 ms and a step size of 10ms, are fed into a stack of two CNN layers, with 128 and 512 filters with a filter width of 9 frames each.",
"In each CNN layer we stride with a factor of 2 along time, apply a ReLU activation (Nair and Hinton, 2010), and apply batch normalization (Ioffe and Szegedy, 2015).",
"The output of the CNN layers is fed into a three-layer bi-directional long short term memory network (LSTM; Hochreiter and Schmidhuber, 1997); each hidden layer has 512 dimensions.",
"Text decoder.",
"At each time step, the decoder chooses the most probable token from the output of a softmax layer produced by a fully-connected layer, which in turn receives the current state of a recurrent layer computed from previous time steps and an attention vector computed over the input.",
"Attention is computed using the global attentional model with general score function and input-feeding , as described in Luong et al. (2015).",
"The predicted token is then fed into a 128-dimensional embedding layer followed by a three-layer LSTM to update the recurrent state; each hidden state has 256 dimensions.",
"While training, we use the predicted token 20% of the time as input to the next decoder step and the training token for the remaining 80% of the time (Williams and Zipser, 1989).",
"At test time we use beam decoding with a beam size of 5 and length normalization (Wu et al., 2016) with a weight of 0.6.",
"Training and implementation.",
"Parameters for the CNN and RNN layers are initialized using the scheme from (He et al., 2015).",
"For the embedding and fully-connected layers, we use Chainer's (Tokui et al., 2015) default initialition.",
"We regularize using dropout (Srivastava et al., 2014), with a ratio of 0 .",
"3 over the embedding and LSTM layers (Gal, 2016), and a weight decay rate of 0 .",
"0001 .",
"The parameters are optimized using Adam (Kingma and Ba, 2015), with a starting alpha of 0.001.",
"Following some preliminary experimentation on our development set, we add Gaussian noise with standard deviation of 0.25 to the MFCC features during training, and drop frames with a probability of 0.10.",
"After 20 epochs, we corrupt the true decoder labels by sampling a random output label with a probability of 0.3.",
"Our code is implemented in Chainer (Tokui et al., 2015) and is freely available.",
"4 3.4 Evaluation Metrics.",
"We report BLEU (Papineni et al., 2002) for all our models.",
"5 In low-resource settings, BLEU scores tend to be low, difficult to interpret, and poorly correlated with model performance.",
"This is because BLEU requires exact four-gram matches only, but low four-gram accuracy may obscure a high unigram accuracy and inexact translations that partially capture the semantics of an utterance, and these can still be very useful in situations like language documentation and crisis response.",
"Therefore, we also report word-level unigram precision and recall, taking into account stem , synonym , and paraphrase matches.",
"To compute these scores, we use METEOR (Lavie and Agarwal, 2007) with default settings for English and French.",
"6 For example, METEOR assigns eat a recall of 1 against reference eat and a recall of 0.8 against reference feed, which it considers a synonym match.",
"Naive baselines.",
"We also include evaluation scores for a naive baseline model that predicts the K most frequent words of the training set as a bag of words for each test utterance.",
"We set K to be the value at which precision/recall are most similar, which is always between 5 and 20 words.",
"This provides an empirical lower bound on precision and recall, since we would expect any usable model to outperform a system that does not even depend on the input utterance.",
"We do not compute BLEU for these baselines, since they do not predict sequences, only bags of words.",
"Using the experimental setup of Section 3, we pre-trained ASR models in English and French, and report their word error rates (WER) on develop-4",
"ment data in Table 1.",
"7 We denote each ASR model by L-Nh , where L is a language code and N is the size of the training set in hours.",
"For example, en-300h denotes an English ASR model trained on 300 hours of data.",
"Training ASR models for state-of-the-art performance requires substantial hyper-parameter tuning and long training times.",
"Since our goal is simply to see whether pre-training is useful, we stopped pretraining our models after around 30 epochs (3 days) to focus on transfer experiments.",
"As a consequence, our ASR results are far from state-of-the-art: current end-to-end Kaldi systems obtain 16% WER on Switchboard train-dev , and 22.7% WER on the French Globalphone dev set.",
"8 We believe that better ASR pre-training may produce better ST results, but we leave this for future work.",
"In the following, we denote an ST model by S-T-Nh , where S and T are source and target language codes, and N is the size of the training set in hours.",
"For example, sp-en-20h denotes a Spanish-English ST model trained using 20 hours of data.",
"We use the code mb for Mboshi and fr for French.",
"Figure 2 shows the BLEU and unigram preci-sion/recall scores on the development set for baseline Spanish-English ST models and those trained after initializing with the en-300h model.",
"Corresponding results on the test set (Table 2) reveal very similar patterns.",
"The remainder of our analysis is confined to the development set.",
"The naive baseline, which predicts the 15 most frequent English words in the training set, achieves a precision/recall of around 20%, setting a performance lower bound.",
"7 We computed WER with the NIST sclite script.",
"8 These WER results taken from respective Kaldi recipes on GitHub, and may not represent the very best results on these data sets.",
"previous results (Bansal et al., 2018) using the same train/test splits, primarily due to better regularization and modeling of subwords rather than words.",
"Yet transfer learning still substantially improves over these strong baselines.",
"For sp-en-20h , transfer learning improves dev set BLEU from 10.8 to 19.9, precision from 41% to 51%, and recall from 38% to 49%.",
"For sp-en-50h , transfer learning improves BLEU from 23.3 to 27.8, precision from 54% to 58%, and recall from 51% to 56%.",
"Very low-resource: 10 hours or less of ST training data.",
"Figure 2 shows that without transfer learning, ST models trained on less than 10 hours of data struggle to learn, with precision/recall scores close to or below that of the naive baseline.",
"But with transfer learning, we see gains in precision and recall of between 10 and 20 points.",
"We also see that with transfer learning, a model trained on only 5 hours of ST data achieves a BLEU of 9.1, nearly as good as the 10.8 of a model trained on 20 hours of ST data without transfer learning.",
"In other words, fine-tuning an English ASR model which is relatively easy to obtainproduces similar results to training an ST model on four times as N = 0 2.5 5 10 20 50 base 0 2.1 1.8 2.1 10.8 22.7 +asr 0.5 5.7 9.1 14.5 20.2 28.2 Table 2: BLEU scores for Spanish-English ST on the Fisher test set, using N hours of training data.",
"We even find that in the very low-resource setting of just 2.5 hours of ST data, with transfer learning the model achieves a precision/recall of around 30% and improves by more than 10 points over the naive baseline.",
"In very low-resource scenarios with time constraintssuch as in disaster reliefit is possible that even this level of performance may be useful, since it can be used to spot keywords in speech and can be trained in just three hours.",
"Sample translations.",
"Table 3 shows example translations for models sp-en-20h and sp-en-50h with and without transfer learning using en-300h .",
"Figure 3 shows the attention weights for the last sample utterance in Table 3.",
"For this utterance, the Spanish and English text have a different word order: mucho tiempo occurs in the middle of the speech utterance, and its translation, long time , is at the end of the English reference.",
"Similarly, vive aqu occurs at the end of the speech utterance, while the translation, living here , is in the middle of the English reference.",
"The baseline sp-en-50h model translates the words correctly but doesn't get",
"the English word order right.",
"With transfer learning, the model produces a shorter but still accurate translation in the correct word order.",
"To understand the source of these improvements, we carried out a set of ablation experiments.",
"For most of these experiments, we focus on Spanish-English ST with 20 hours of training data, with and without transfer learning.",
"Transfer learning with selected parameters.",
"In our first set of experiments, we transferred all parameters of the en-300h model, including the speech encoder CNN and LSTM; the text decoder embedding, LSTM and output layer parameters; and attention parameters.",
"To see which set of parameters has the most impact, we train the sp-en-20h model by transferring only selected parameters from en-300h , and randomly initializing the rest.",
"The results (Figure 4) show that transferring all 1 10 20 30 40 50 60 training epochs 0 2 4 6 8 10 12 14 16 18 20 BLEU +asr:all +asr:enc +asr:dec +asr:cnn base Figure 4: Fisher development set training curves (reported using BLEU) for sp-en-20h using selected parameters from en-300h : none (base); encoder CNN only (+asr:cnn); encoder CNN and LSTM only (+asr:enc); decoder only (+asr:dec); and all: encoder, attention, and decoder (+asr:all).",
"parameters is most effective, and that the speech encoder parameters account for most of the gains.",
"We hypothesize that the encoder learns transferable low-level acoustic features that normalize across variability like speaker and channel differences to better capture meaningful phonetic differences, and that much of this learning is language-independent.",
"This hypothesis is supported by other work showing the benefits of cross-lingual and multilingual training for speech technology in low-resource target languages (Carlin et al., 2011; Jansen et al., 2010; Deng et al., 2013; Vu et al., 2012; Thomas et al., 2012; Cui et al., 2015; Alumae et al., 2016; Yuan et al., 2016; Renshaw et al., 2015; Hermann and Goldwater, 2018).",
"By contrast, transferring only decoder parameters does not improve accuracy.",
"Since decoder parameters help when used in tandem with encoder parameters, we suspect that the dependency in parameter training order might explain this: the transferred decoder parameters have been trained to expect particular input representations from the encoder, so transferring only the decoder parameters without the encoder might not be useful.",
"Figure 4 also suggests that models make strong gains early on in the training when using transfer learning.",
"The sp-en-20h model initialized with all model parameters ( +asr:all ) from en-300h reaches a higher BLEU score after just 5 epochs (2 hours) of training than the model without transfer learning trained for 60 epochs/20 hours.",
"This again can be useful in disaster-recovery scenarios, where the 0h 100h 300h # English ASR hours data used 036 91215 182124 2730 BLEU s p e n 20 h sp-en-50h Figure 5: Spanish-to-English BLEU scores on Fisher dev set, with 0h (no transfer learning), 100h and 300h of English ASR data used.",
"Amount of ASR data required.",
"Figure 5 shows the impact of increasing the amount of English ASR data used on Spanish-English ST performance for two models: sp-en-20h and sp-en-50h .",
"For sp-en-20h , we see that using en-100h improves performance by almost 6 BLEU points.",
"By using more English ASR training data ( en-300h ) model, the BLEU score increases by almost 9 points.",
"However, for sp-en-50h , we only see improvements when using en-300h .",
"This implies that transfer learning is most useful when only a few tens of hours of training data are available for ST. As the amount of ST training data increases, the benefits of transfer learning tail off, although it's possible that using even more monolingual data, or improving the training at the ASR step, could extend the benefits to larger ST data sets.",
"Impact of code-switching.",
"We also tried using the en-300h ASR model without any fine-tuning to translate Spanish audio to English text.",
"This model achieved a BLEU score of 1.1, with a precision of 15 and recall of 21.",
"The non-zero BLEU score indicates that the model is matching some 4-grams in the reference.",
"This seems to be due to code-switching in the Fisher-Spanish speech data set.",
"Looking at the dev set utterances, we find several examples where the Spanish transcriptions match the English translations, indicating that the speaker switched into English.",
"For example, there is an utterance whose Spanish transcription and English translation are both right yeah, and this English expression is indeed present in the source audio.",
"The English ASR model correctly translates this utterance, which is unsurprising since the phrase right yeah occurs nearly 500 times in Switchboard.",
"Overall, we find that in nearly 500 of the 4,000 development set utterances (14%), the Spanish transcription and English translations share more than half of their tokens, indicating likely code-switching.",
"This suggests that transfer learning from English ASR models might help more than from other languages.",
"To isolate this effect from transfer learning of language-independent speech features, we carried out a further experiment.",
"In this experiment, we pre-train using French ASR data for a Spanish-English translation task.",
"Here, we can only transfer the speech encoder parameters, and there should be little if any benefit due to code-switching.",
"Because our French data set (20 hours) is much smaller than our English one (300 hours), for a fair comparison we used a 20 hour subset of the English data for pre-training in this experiment.",
"For both the English and French models, we transferred only the encoder parameters.",
"Table 4 shows that both the English and French 20-hour pre-trained models improve performance on Spanish-English ST. The English model works slightly better, as would be predicted given our discussion of code-switching, but the French model is also useful, improving BLEU from 10.8 to 12.5.",
"This result strengthens the claim that ASR pretraining on a completely distinct third language can help low-resource ST. Presumably benefits would be much greater if we used a larger ASR data set, as we did with English above.",
"In this experiment, the French pre-trained model used a French BPE output vocabulary, distinct from the English BPE vocabulary used in the ST system.",
"In the future it would be interesting to try combining the French and English text to create a combined output vocabulary, which would allow transferring both the encoder and decoder parameters, and may be useful for translating names or cognates.",
"More generally, it would also be possible to pre-train on multiple languages simultaneously using a shared BPE vocabulary.",
"There is evidence that speech features trained on multiple languages transfer better than those trained on the same amount of data from a single language ( Hermann and Goldwater, 2018), so multilingual pretraining for ST could improve results.",
"Table 5 shows the ST model scores for Mboshi-French with and without using transfer learning.",
"The first two rows fr-top-8w , fr-top-10w , show precision and recall scores for the naive baselines where we predict the top 8 or 10 most frequent French words in the Mboshi-French training set.",
"These show that a precision/recall in the low 20s is easy to achieve, although with no n-gram matches (0 BLEU).",
"The pre-trained ASR models by themselves (next two lines) are much worse.",
"The baseline model trained only on ST data actually has lower precision/recall than the naive baseline, although its non-zero BLEU score indicates that it is able to correctly predict some n-grams.",
"We see comparable precision/recall to the naive baseline with improvements in BLEU by transferring either French ASR parameters (both encoder and decoder, fr-20h ) or English ASR parameters (encoder only, en-300h ).",
"Finally, to achieve the benefits of both the larger training set size for the encoder and the matching language of the decoder, we tried transferring the encoding parameters from the en-300h model and the decoding parameters from the fr-20h model.",
"This configuration ( en+fr ) gives us the best evaluation scores on all metrics, and highlights the flexi-bility of our framework.",
"Nevertheless, the 4-hour scenario is clearly a very challenging one.",
"This paper introduced the idea of pre-training an end-to-end speech translation system involving a low-resource language using ASR training data from a higher-resource language.",
"We showed that large gains are possible: for example, we achieved an improvement of 9 BLEU points for a Spanish-English ST model with 20 hours of parallel data and 300 hours of English ASR data.",
"Moreover, the pre-trained model trains faster than the baseline, achieving higher BLEU in only a couple of hours, while the baseline trains for more than a day.",
"We also showed that these methods can be used effectively on a real low-resource language, Mboshi, with only 4 hours of parallel data.",
"The very small size of the data set makes the task challenging, but by combining parameters from an English encoder and French decoder, we outperformed baseline models to obtain a BLEU score of 7.1 and precision/recall of about 25%.",
"We believe ours is the first paper to report word-level BLEU scores on this data set.",
"Our analysis indicates that, other things being equal, transferring both encoder and decoder parameters works better than just transferring one or the other.",
"However, transferring the encoder parameters is where most of the benefit comes from.",
"Pre-training using a large ASR corpus from a mismatched language will therefore probably work better than using a smaller ASR corpus that matches the output language.",
"Our analysis suggests several avenues for further exploration.",
"On the speech side, it might be even more effective to use multilingual training; or to replace the MFCC input features with pre-trained multilingual features, or features that are targeted to low-resource multispeaker settings (Kamper et al., 2015, 2017; Thomas et al., 2012; Cui et al., 2015; Yuan et al., 2016; Renshaw et al., 2015).",
"On the language modeling side, simply transferring decoder parameters from an ASR model did not work; it might work better to use pre-trained decoder parameters from a language model, as proposed by Ramachandran et al. (2017), or shallow fusion (Gulcehre et al., 2015; Toshniwal et al., 2018a), which interpolates a pre-trained language model during beam search.",
"In these methods, the decoder parameters are independent, and can therefore be used on their own.",
"We plan to explore these strategies in future work.",
"We would like to thank the anonymous reviewers for their valuable feedback.",
"This work was supported in part by a James S McDonnell Foundation Scholar Award, a Google faculty research award, and NSF grant 1816627.",
"We thank Ida Szubert and Clara Vania for helpful comments on previous drafts of this paper and Antonios Anastasopoulos for tips on experimental setup."
] | [
"other",
"result",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"objective",
"objective",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"Gender bias is largely recognized as a problematic phenomenon affecting language technologies, with recent studies underscoring that it might surface differently across languages.",
"However, most of current evaluation practices adopt a word-level focus on a narrow set of occupational nouns under synthetic conditions.",
"Such protocols overlook key features of grammatical gender languages, which are characterized by morphosyntactic chains of gender agreement, marked on a variety of lexical items and parts-of-speech (POS).",
"To overcome this limitation, we enrich the natural, gender-sensitive MuST-SHE corpus (Ben-tivogli et al., 2020) with two new linguistic annotation layers (POS and agreement chains), and explore to what extent different lexical categories and agreement phenomena are impacted by gender skews.",
"Focusing on speech translation, we conduct a multifaceted evaluation on three language directions (English-French/Italian/Spanish), with models trained on varying amounts of data and different word segmentation techniques.",
"By shedding light on model behaviours, gender bias, and its detection at several levels of granularity, our findings emphasize the value of dedicated analyses beyond aggregated overall results.",
"As Matasovic (2004) posits: Gender is perhaps the only grammatical category that ever evoked passion and not only among linguists. That is because, in the case of human entities, masculine or feminine inflections are assigned semantically, i.e. in relation to the extra-linguistic reality of gender (Ackerman, 2019; Corbett, 1991, 2013).",
"Thus, gendered features interact with the sociocultural and political perception and representation of individuals (Gygax et al., 2019), by prompting discussions on the appropriate recognition of gender groups and their linguistic visibility (Stahlberg et al., 2007; Hellinger and Motschenbacher, 2015; Hord, 2016).",
"Such concerns also invested language technologies (Sun et al., 2019; Cao and Daum III, 2020), where it has been shown that automatic translation systems tend to over-represent masculine forms and amplify stereotypes when translating into grammatical gender languages (Savoldi et al., 2021).",
"Current evaluation practices for assessing gender bias in both Machine (MT) and Speech Translation (ST) commonly inspect such concerning behaviours by focusing only on a restricted set of occupational nouns (e.g. nurse , doctor ), and on synthetic benchmarks (Stanovsky et al., 2019; Escud Font and Costa-juss, 2019; Renduchintala et al., 2021).",
"Also, even when relying on lexically richer natural benchmarks, the designed metrics still work at the word level, treating all gender-marked words indiscriminately (Alhafni et al., 2020; Bentivogli et al., 2020).",
"Accordingly, current test sets and protocols:",
"i) do not allow us to inspect if and to what extent different word categories participate in gender bias,",
"ii) overlook the underlying morphosyntactic nature of grammatical gender on agreement chains, which cannot be monitored on single isolated words (e.g. en : a strange friend; it : una/o strana/o amica/o).",
"In fact, to be grammatically correct, each word in the chain has to be inflected with the same (masculine or feminine) gender form.",
"1 We believe that fine-grained evaluations including the analysis of gender agreement across different parts of speech (POS) are relevant not only to gain a deeper understanding of bias in grammatical gender languages, but also to inform mitigating strategies and data curation procedures.",
"Toward these goals, our contributions are as follows.",
"(1) We enrich MuST-SHE (Bentivogli et al., 2020) the only natural gender-sensitive benchmark available for MT and also ST with two layers of linguistic information: POS and agreement 1 For an analogy, consider the case of (lack of) number agreement in the following: *a dogs barks.",
"chains.",
"2 (2) In light of recent studies exploring how model design and overall perfomance interplay with gender bias (Roberts et al., 2020; Gaido et al., 2021), we rely on our manually curated resource to compare three ST models, which are trained on varying amounts of data, and built with different segmentation techniques: character and byte-pair-encoding (BPE) (Sennrich et al., 2016).",
"We carry out a multifaceted evaluation that includes automatic and extensive manual analyses on three language pairs (en-es, en-fr, en-it) and we consistently find that:",
"i) not all POS are equally impacted by gender bias;",
"ii) translating words in agreement does not emerge as a systematic issue;",
"iii) ST systems produce a considerable amount of neutral rewordings in lieu of gender-marked expressions, which current binary benchmarks fail to recognize.",
"Finally, in line with concurring studies, we find that",
"iv) character-based systems have an edge on translating gender phenomena, by favouring morphological and lexical diversity.",
"While research in Natural Language Processing (NLP) initially prioritized narrow technical interventions to address the social impact of language technologies, we are recently attesting a shift toward a more comprehensive understanding of bias (Shah et al., 2020; Blodgett et al., 2020).",
"Along this line, focus has been given to bias analysis in models' innards and outputs (Vig et al., 2020; Costa-juss et al., 2022), and to ascertain the validity of bias measurement practices (Blodgett et al., 2021; Antoniak and Mimno, 2021; Goldfarb-Tarrant et al., 2021).",
"Complementary mounting evidence suggests that rather than striving for generalizations gender bias detection ought to incorporate contextual and linguistic specificity (Gonzlez et al., 2020; Ciora et al., 2021; Matthews et al., 2021; Malik et al., 2021; Kurpicz-Briki and Leoni, 2021), which however receives little attention due to a heavy focus on English NLP (Bender and Friedman, 2018).",
"Purported agnostic approaches and evaluations (Bender, 2009) can prevent from drawing reliable conclusions and mitigating recommendations, as attested by monolingual studies on grammatical gender languages (Zhou et al., 2019; Gonen et al., 2019; Zmigrod et al., 2019) and in 2 The annotation layers are an extension of MuST-SHE v1.2 and are freely downloadable at: ict.fbk.eu/must-she/ under the same MuST-SHE licence (CC BY NC ND 4.0) Figure 1: Example of gender-mapping in translation from the parallel en-it portion of the natural MuST-SHE corpus.",
"Unlike English, grammatical gender languages exhibit an elaborate morphological and syntactic system, where gender is overtly marked on numerous POS (e.g., verbs, determiners, nouns), and related words have to agree on the same gender features (see Figure 1 for an example).",
"Still, current corpora and evaluation practices do not fully foreground systems' behaviour on such grammatical constraints.",
"WinoMT (Stanovsky et al., 2019) represents the standard corpus to evaluate gender bias in MT within an English-to-grammatical gender language scenario.",
"It has been progressively enriched with new features (Saunders et al., 2020; Kocmi et al., 2020), and adapted for ST (Costa-juss et al., 2020).",
"While this resource can be useful to diagnose gender stereotyping at scale, it excludes languages' peculiarities since it is built on the concatenation of two corpora designed for English monolingual tasks 3 WinoGender (Rudinger et al., 2018) and WinoBias (Zhao et al., 2018) which consist of synthetic sentences with the same structure and a pre-selected occupational lexicon (e.g. The lawyer yelled at the hairdresser because he did a bad job).",
"4 To increase variability, Troles and Schmid (2021) extend WinoBias by accompanying occupations with highly gender-stereotypical verbs 3 Gonzlez et al. (2020) note that the U.S. labor market statistics employed to define stereotypical associations are not always in line with other national gender statistics, thus they may impose an Anglo-centric frame for the detection of bias in other language scenarios.",
"4 Levy et al. (2021) recently created BUG on natural English data, but still it is limited to the evaluation of occupations.",
"and adjectives.",
"Their evaluation though, still only considers the translated professions as to verify if the co-occuring words might skew the models' assumptions.",
"However, gender-marking involves also several other, so far less accounted POS categories, but if they are just as problematic is not clear yet.",
"Existing bilingual (Alhafni et al., 2021), and multilingual (Bentivogli et al., 2020) natural benchmarks, instead, are manually curated as to identify a variety of gender phenomena specifically modeled on the accounted languages.",
"As a result, they maximize lexical and contextual variability to inspect whether translation models yield feminine under-representation in real-world-like scenarios (Savoldi et al., 2021).",
"However, since this variability is not mapped into fine-grained linguistic information, evaluations on such corpora do not single out which instances may be more responsible for gender bias.",
"Finally, by considering each word in isolation, they neglect the underlying features of gender agreement, which determine the grammatical acceptability of the translation.",
"To the best of our knowledge, only two works have currently interplayed issues of syntactic agreement and gender bias.",
"Renduchintala and Williams (2021) designed a set of English sentences involving a syntactic construction that requires to translate an occupational term according to its unequivocal gender trigger (e.g. that nurse is a funny man).",
"While they find that MT struggles even in such a simple setting, they only inspect the translation of a single disambiguated word ( nurse ) rather than a whole group of words in agreement.",
"Closer to our intent, Gaido et al. (2020) analyze the output of different ST systems and note that their models seem to wrongly pick divergent gender inflections for unrelated words in the same sentence (e.g. en : As a researcher, professor; fr : En tant que chercheuse F , professeur M ) but not for dependency-related ones (e.g. en : The classic Asian student; it : [La classica studentessa asiatica] F ).",
"Although limited in scope, their observation is worth being explored systematically.",
"We thus conduct the very first study that intersects POS, agreement, and gender bias.",
"In light of the above, a fine-grained evaluation of bias focused on POS and gender agreement requires the creation of a new dedicated resource.",
"Rather than building it from scratch, we add two annotation layers to the existing MuST-SHE bench-PARTS-OF-SPEECH",
"mark (Bentivogli et al., 2020), which is built on spoken language data retrieved from TED talks.",
"Available for en-es/fr/it, it represents the only multilingual MT and STGBET 5 exhibiting a natural variety of gender phenomena, which are balanced across feminine and masculine forms.",
"In the reference translations of the corpus, each target gender-marked word corresponding to a neutral expression in the English source is annotated with its alternative wrong gender form (e.g. en : the girl left ; it : la<il> ragazza an-data<andato> via ).",
"As further discussed in in Section 4.2, such a feature enables fine-grained analyses of gender realization, which can also disentangle systems' tendency to (over)generate masculine over feminine forms in translation.",
"MuST-SHE thus allows the identification and pinpointed evaluation of numerous and qualitatively different grammatical gender instances under authentic conditions.",
"Furthermore, the target languages covered in MuST-SHE (es, fr, it) are particularly suitable to focus on linguistic specificity.",
"As a matter of fact, as Gygax et al. (2019) suggest, accounting for gender in languages with similar typological features allows for proper comparisons.",
"6 5 Gender Bias Evaluation Testset (Sun et al., 2019).",
"6 We underscore that our dedicated resources and experiments intentionally account for the specificities of three (com-parable) grammatical gender languages.",
"Hence, we remain cautious of extending by default the results of our annotation and experiments to any other language.",
"Parts-Of-Speech.",
"We annotate each target gender-marked word in MuST-SHE with POS information.",
"As shown in Table 1 ( a-c ), we differentiate among six POS categories: 7",
"i) articles,",
"ii) pronouns, iii ) nouns, and iv ) verbs.",
"For adjectives, we further distinguish",
"v) limiting adjectives with mi-nor semantic import that determine e.g. possession, quantity, space ( my , some , this ); and",
"vi) descriptive adjectives that convey attributes and qualities, e.g. glad , exhausted .",
"This distinction enables to neatly sort our POS categories into the closed class of function words, or into the open one of content words (Schachter and Shopen, 2007).",
"Since words from these two classes differ substantially in terms of variability, frequency, and semantics, we reckon they represent a relevant variable to account for in the evaluation of gender bias.",
"Agreement.",
"We also enrich MuST-SHE with linguistic information that is relevant to investigate the morphosyntactic nature of grammatical gender agreement.",
"Gender agreement, or concord (Cor-bett, 2006; Comrie, 1999), requires that related words match the same gender form, as in the case of phrases , i.e. groups of words that constitute a single linguistic unit.",
"8 Thus, as shown in Table 1, we identify and annotate as agreement chains gender-marked words that constitute a phrase, such as a noun plus its modifiers ( d ), and verb phrases for compound tenses ( e ).",
"Also, structures that involve a gender-marked (semi-) copula verb and its predicative complement are annotated as chains ( f ), although in such cases the agreement constraint is weaker.",
"9 This annotation lets us verify whether a model consistently picks the same gender paradigm for all words in the chain, enabling the assessment of its syntagmatic behaviour.",
"POS and agreement annotation was manually carried out by 6 annotators (2 per language pair) undergoing a linguistics/translation studies MA degree, and with native/excellent proficiency in the assigned target language.",
"For each language pair, 7 Some POS categories (e.g. conjunctions, adverbs) are not considered since they are not subject to gender inflection.",
"matical e.g. es : *el M buen M nin F ( en: the good kid).",
"9 Such structure, due to the semantics of some linking verbs, can enable more flexibility.",
"E.g. in French, Elle est devenue F un M canard M ( She became a duck ) is grammatical, although un canard (a duck) is formally masculine.",
"they annotated the whole corpus independently, based on detailed guidelines (see Appendix A).",
"For POS, we computed inter-annotator agreement (IAA) on label assignment with the kappa coefficient (in Scott's formulation) (Scott, 1955).",
"The resulting values of 0.92 (en-es), 0.94 (en-fr) and 0.96 (en-it) correspond to almost perfect agreement according to its standard interpretation (Lan-dis and Koch, 1977).",
"For gender agreement, IAA was calculated on the exact match of the complete chains in the two annotations.",
"The resulting Dice coefficients (Dice, 1945) of 89.23% (en-es), 93.0% (en-fr), and 94.34% (en-it), can be considered highly satisfactory given the more complex nature of this latter task.",
"Except for few liminal cases that were excluded from the dataset, all disagreements were reconciled.",
"We show the final annotation statistics in Table 2. Variations across languages are due to inherently cross-lingual differences.",
"10 While their discussion is beyond the scope of this work, overall these figures underscore the so far largely unaccounted variability of gender across lexical categories.",
"Our experiments draw on studies exploring the relation between overall system performance, model size and gender bias.",
"Vig et al. (2020) posit that bias increases with model size as larger systems better emulate biased training data.",
"Working on WinoMT/ST, Kocmi et al. (2020) correlate higher BLEU scores and gender stereotyping, whereas Costa-juss et al. (2020) show that systems with lower performance tend to produce fewer feminine translations for occupations, but rely less on stereotypical cues.",
"To account for these findings and inspect the behavior of different models under natural conditions, we experiment with three end-to-end 10 Spanish, for instance, relies less than French or Italian on the gender-enforcing to be auxiliary, resulting in less gender-marked verbs ( fr : est parti/ie; it : partita/o; es : se ha ido).",
"ST solutions, namely: LARGE-BPE , SMALL-BPE and SMALL-CHAR (see Appendix B for complete details about the models and training setups).",
"Developed to achieve state-of-the-art performance, LARGE-BPE models rely on Transformer (Vaswani et al., 2017) and are trained in rich data conditions (1.25M ASR/ST utterances) by applying BPE segmentation (Sennrich et al., 2016).",
"To achieve high performance, we made use of:",
"i) all the available ST training corpora for the languages addressed, namely MuST-C (Cattoni et al., 2021) and Europarl-ST (Iranzo-Snchez et al., 2020);",
"ii) consolidated data augmentation methods (Nguyen et al., 2020; Park et al., 2019; Jia et al., 2019); and",
"iii) knowledge transfer techniques from ASR and MT, namely component pre-training and knowledge distillation (Weiss et al., 2017a; Bansal et al., 2019).",
"11 In terms of BLEU score 34.12 on en-es, 40.3 on en-fr, 27.7 on en-it our LARGE-BPE models compare favorably with recently published results on MuST-C test data (Le et al. 2021 12 and Bentivogli et al. 2021 13 ).",
"Also built with the same (Transformer-based) core technology, the other systems, SMALL-BPE and SMALL-CHAR , allow for apples-to-apples comparison between the different capabilities of BPE and character-level tokenization, namely:",
"i) the syntactic advantage of BPE in managing several agreement phenomena (Sennrich, 2017; Ataman et al., 2019), and",
"ii) the higher capability of character-level at generalizing morphology (Be-linkov et al., 2020).",
"Given the morphological and syntactic nature of gender, such differences make them enticing candidates for further analysis.",
"So far, Gaido et al. (2021) carried out the only study interplaying the two segmententation methods and gender bias, and found that in spite of lower overall performance character tokenization results in higher production of feminine forms for ST. By exploiting our new enriched resource, we intend to further test this finding and extend the analysis to gender agreement.",
"Thus, for the sake of comparison with (Gaido et al., 2021), we train these systems in the same (controlled) data conditions i.e. on the MuST-C corpus only.",
"11 We are aware that both MuST-C and Europarl-ST are characterized by a majority (70%) of masculine speakers (Gaido et al., 2020; Vanmassenhove et al., 2018).",
"Although comprehensive statistics are not available for the other ASR and MT training resources, we can reasonably assume they are similarly biased.",
"12 28.73 on en-es, 34.98 on en-fr, 24.96 on en-it.",
"13 32.93 on en-es, 28.56 on en-it.",
"We employ the enriched MuST-SHE corpus to assess generic performance and gender translation at several levels of granularity.",
"Evaluating gender translation under natural conditions grants the advantage of inspecting diverse informative phenomena.",
"Concurrently, however, the intrinsic variability of natural language can defy automatic approaches based on reference translations: Since language generation is an open-ended task, in our specific setting system's outputs may not contain the exact gender-marked words annotated in MuST-SHE.",
"In fact, the released MuST-SHE evaluation script (Gaido et al., 2020) first measures dataset coverage , i.e. the proportion of annotated words that are generated by the system, and on which gender translation is hence measurable.",
"Then, it calculates gender accuracy as the proportion of words generated in the correct gender among the measurable ones.",
"As a result, all the out of coverage words are necessarily left unevaluated.",
"For all word-level gender evaluations (Sections 5.1 and 5.2), we compute accuracy as in the official MuST-SHE script and include scores based on the POS annotations.",
"Instead, for chain-level gender agreement evaluation (Section 6.1) we modified the original script to process full agreement chains instead of single words.",
"14 Finally, since we aim at gaining qualitative insights into systems' behaviour, and at ensuring a sound and thorough multifaceted evaluation, we overcome the described coverage limitation of the automatic evaluation by complementing it with a manual analysis of all the gender-marked words and agreement chains that remained out of coverage.",
"This extensive manual evaluation was carried out via a systematic annotation of systems' outputs, performed by the same linguists that enriched MuST-SHE, who provided the appropriate knowledge of both the resource and the evaluation task.",
"Accordingly, we manage to make our study completely exhaustive by covering every gender-marked instance of MuST-SHE.",
"Also, such additional manual evaluation serves as a proof-of-concept to ensure the validity of the employed automatic evaluation metrics.",
"14 The scripts are released together with the MuST-SHE annotated extensions.",
"Table 3 presents SacreBLEU (Post, 2018), 15 coverage, and gender accuracy scores on the MuST-SHE test sets.",
"All language directions exhibit a consistent trend: LARGE-BPE systems unsurprisingly achieve by far the highest overall translation quality.",
"Also, in line with previous analyses (Di Gangi et al., 2020), SMALL-BPE models outperform the CHAR ones by 1 BLEU point.",
"The higher overall translation quality of LARGE-BPE models is also reflected by the coverage scores (All-Cov), where they generate the highest number of MuST-SHE gender-marked words for all language pairs.",
"By turning to overall gender accuracy (All-Acc) though, the edge previously assessed for the bigger state-of-the-art systems ceases to be clear-cut.",
"For en-es and en-fr, LARGE-BPE systems outperform the concurring SMALL-CHAR by 2 points only a slim advantage compared to the large gap observed on BLEU score.",
"Moreover, for en-it, SMALL-CHAR proves the best at translating gender.",
"We further zoom into the comparison of gender translation for feminine (F-Acc) and masculine (M-Acc) forms, where we can immediately assess that all ST models are skewed toward a disproportionate production of masculine forms (on average, 53.1% for F vs. 81.3% for M).",
"However, focusing on LARGE-BPE models, we discover that their higher global gender accuracy (All-Acc) is actually due to the higher generation of masculine forms, while they do not compare favorably when it comes to feminine translation.",
"In fact, in spite of achieving the lowest generic translation quality, SMALL-CHAR prove on par (for en-es) or even better (for en-it and en-fr) than LARGE-BPE models at handling feminine gender translation.",
"tic metrics, are able to disentangle gender phenomena.",
"As such, we can confirm that higher generic performance does not entail a superior capacity of producing feminine gender.",
"This does not only emerge, as per Gaido et al. (2021), in the comparison of (small) BPEand char-based ST models.",
"Rather, even for stronger systems, we attest how profiting from a wealth of uncurated and synthetic (Bender et al., 2021) data does not grant advantages to address gender bias.",
"This motivates us to continue our multifaceted evaluation by taking into account only small models henceforth CHAR and BPE that, being trained on the same MuST-C data, allow for sound and transparent comparison.",
"At a finer level of granularity, we use our extension of MuST-SHE to inspect gender bias across open and closed class words.",
"Their coverage ranges between 74-81% for function words, but it shrinks to 44-59% for content words (see Appendix C.1).",
"This is expected given the limited variability and high frequency of functional items in language.",
"Instead, the coverage of feminine and masculine forms is on par within each class for all systems, thus allowing us to evaluate gender accuracy on a comparable proportion of generated words.",
"A bird's-eye view of Figure 2 attests that, although masculine forms are always disproportionately produced, the gender accuracy gap is amplified on the open class words.",
"The consistency of such a behaviour across languages and systems suggests that content words are involved to a greater extent in 1812 gender bias.",
"We hence analyse this more problematic class by looking into a breakdown of the results per POS, while for function words' gender accuracy we refer to Appendix C.2.",
"Table 4 presents results for verbs , nouns and descriptive adjectives .",
"First, in terms of system capability, CHAR still consistently emerge as the favorite models for feminine translation.",
"What we find notable, though, is that even within the same class we observe evident fluctuations, where nouns come forth as the most biased POS with a huge divide between M and F accuracy (5277 points).",
"Specifically, scores below 50% indicate that feminine forms are generated with a probability that is below random choice, thus signalling an extremely strong bias.",
"In light of this finding, we hypothesize that semantic and distributional features might be a factor to interpret words' gender skew.",
"Specifically, occupational lexicon (e.g. lawyer, professor) makes up for most of the nouns represented in MuST-SHE ( 70%).",
"While such a high rate of professions in TED data is not surprising per se , 16 it singles out that professions may actually represent a category where systems largely rely on spurious cues to perform gender translation, even within natural conditions that do not ambiguously prompt stereotyping.",
"We exclude basic token frequency by POS as a key factor to interpret our results, as MuST-SHE feminine nouns do not consistently appear as the POS with the lowest number of occurrences, nor do they have the lowest F:M ratio within MuST-C training data.",
"As discussed in Section 8, we believe that our breakdown per POS is informative inasmuch it prompts qualitative considerations on how to pursue gender bias mitigation in models and corpora (Czarnowska et al., 2021; Doughman et al., 2021).",
"We manually inspect CHAR and BPE system's output on the out-of-coverage (OOC) words that could not be automatically evaluated (see All-Cov column in Table 3), which amount to more than 5,000 instances.",
"As shown in Table 5, our analysis discerns between OOC words due to",
"i) translation errors (Err), 17 and",
"ii) adequate alternative translations (i.e. meaning equivalent) for the expected gender-marked words.",
"Such alternatives comprise instances in which word omission is acceptable 16 As TED talks are held by field experts, references to education and titles are quite common (MacKrill et al., 2021).",
"(Alt-O) (Baker, 1992), and rewordings through synonyms or paraphrases.",
"Since our focus remains on gender translation, we distinguish when such rewordings are generated with correct (Alt-C) or wrong (Alt-W) gender inflections, as well as neutral expressions devoid of gender-marking (Alt-N).",
"Note that with respect to English (Cao and Daum III, 2020; Vanmassenhove et al., 2021; Sun et al., 2021) overcoming the structural pervasiveness of gender specifications in grammatical gender languages is extremely challenging (Gabriel et al., 2018a), but some rewordings can enable indirect neutral language (INL) 18 (Lpez, 2020).",
"The results of the analysis are shown in Figure 3. Surprisingly, we find that BPE models in spite of their higher BLEU scores accumulate more translation errors than their CHAR counterparts.",
"19 Conversely, CHAR models generate an overall higher proportion of alternatives and, more importantly, alternatives whose gender translation is acceptable (-N, -C).",
"This suggests that CHAR output is characterized by a favourable adequate variability that 18 INL relies on generic expressions rather than gender-specific ones (e.g. service vs. waiter/tress ) See Section 8.",
"conveys both lexical meaning and gender realization better than BPE .",
"Also, note that the outcome of the manual analyses reiterates the results obtained with the automatic evaluation based on accuracy at the word-level, thus confirming its reliability.",
"As a final remark, we find that all systems produce a considerable amount of neutral alternatives in their outputs.",
"To gain insight into such neutralizations, we audit on which POS they are realized.",
"Accordingly, we find that neutralizations of adjectives and nouns are quite limited, and concern the production of epicene synonyms (e.g. en : happy; es-ref : contento/a; es-out : feliz).",
"Verbs, instead, are largely implicated in the phenomenon, since inflectional changes in tense and aspect paradigms (e.g., present, imperfective) that do not convey gender distinctions are feasible (see the -N example in Table 5).",
"Such range of alternatives for verbs is in fact also reflected by its lowest coverage among all POS (as low as 32%).",
"Finally, paraphrases based on verbs also represent the most frequent way to neutralize other POS in the output.",
"Since such expressions are suitable, or even preferable, for several scenarios (e.g. to substitute masculine generics, to avoid making unlicensed gender assumptions) our finding encourages the creation of test sets accounting for such a third viable direction, and can shed light on systems' potential to produce INL alternatives.",
"The final step in our multifaceted analysis goes beyond the word level to inspect agreement chains in translation.",
"To this aim, we define coverage as the proportion of generated chains matching with those annotated in MuST-SHE.",
"Then, the accuracy of the generated chains accounts for 3 different cases where:",
"i) agreement is respected, and with the correct gender (C);",
"ii) agreement is respected, but with the wrong gender (W); and",
"iii) both feminine and masculine gender inflections occur together, and thus agreement is not respected (NO).",
"Table 6 shows accuracy scores for all MuST-SHE agreement chains (All), also split into feminine (F) and masculine (M) chains.",
"The overall results are promising: we find very few instances (literally 1 or 2) in which ST systems produce an ungrammatical output that breaks gender agreement (NO).",
"In fact, both systems tend to be consistent with one picked gender for the whole de-All Feminine Masculine C W NO C W NO C W NO en-es bpe 74.3 24.6 1.2 33.9 64.4 1.7 95.5 3.6 0.9 char 78.4 21.0 0.6 42.4 57.6 0.0 96.6 2.6 0.9 en-fr bpe 67.9 31.0 1.2 54.1 45.9 0.0 78.7 19.1 2.1 char 76.7 22.3 1.0 57.5 40.0 2.5 88.9 11.1 0.0 en-it bpe 71.7 27.5 0.7 47.4 50.9 1.8 88.9 11.1 0.0 char 78.5 20.0 1.5 54.2 44.1 1.7 97.4 1.3 1.3 Table 6: Agreement results for All chains matched in MuST-SHE, and split into Feminine and Masculine chains.",
"pendency group.",
"Thus, in spite of previous MT studies concluding that character-based segmentation results in poorer syntactic capability (Belinkov et al., 2020), respecting concord does not appear as an issue for any of our small ST models.",
"For the sake of comparability, however, we note that our evaluation involves language pairs that do not widely resort to long-range dependencies; this may contribute to explaining why CHAR better handles correct gender agreement.",
"20 Overall, agreement translation was measured on a lower coverage (30-50%) presented in Appendix D.1 than the word-level one (Section 3).",
"While this is expected given the strict requirement of generating full chains with several words, we recover such a loss by means of the comprehensive manual evaluation discussed below.",
"Our manual inspection recovers a total of 1,200 OOC agreement chains from CHAR and BPE output.",
"Similarly to the approach employed for single words (Section 5.3), we discern between OOC chains due to:",
"i) translation errors (Err), and",
"ii) alternative translations preserving the source meaning.",
"We distinguish different types of alternatives.",
"First, alternatives that do no exhibit a morphosyntactic agreement phenomenon to be judged, as in the case of neutral paraphrases or rewordings consisting of a single word (NO-chain).",
"Instead, when the generated alternative chain exhibits gender markings, we distinguish if the chosen gender is correct (C), wrong (W), or if the system produces a chain that does not respect gender agreement because it combines both feminine and masculine gender inflections (NO).",
"20 Due to space constraints we refer to Appendix D.2 for an analysis of longer-range cases of subject-verb agreement.",
"is presented in Figure 4.",
"Interestingly, such results are only partially corroborating previous analyses.",
"On the one hand, unlike the OOC words' results discussed in Section 5.3, we attest that CHAR models produce the highest proportion of translation errors.",
"Thus, it seems that CHAR capability in producing adequate alternatives is confined to the single-word level, whereas it exhibits a higher failure rate on longer sequences.",
"On the other hand, by looking at alternative chains, CHAR still emerges as the best at properly translating gender agreement, with the highest proportion of chains with correct gender (C), and the lowest one with wrong gender (W).",
"Finally, again in line with our automatic evaluation (Table 6), we confirm that respecting agreement is not an issue for our ST models: we identify only 3 cases (2 for en-fr BPE , 1 for en-fr CHAR ) where concord is broken (NO).",
"Given the rarity of such instances, we are not able to draw definitive conclusions on the nature of these outliers.",
"Nonetheless, we check the instances in which agreement was not respected (both in and out of coverage).",
"We see that cases of broken concord also concern extremely simple phrases, consisting of a noun and its modifier (e.g. en: talking to [this inventor],...because he ; fr: parler [cette F inventeur M ] ..., parce qu' il ).",
"However, the most common type among these outliers are constructions with semi-copula verbs (e.g. en: She... [became a vet] ; it: ...E' [ diventata F un M veterinatrio M ] ), which as discussed in Section 3.1 exhibit a weaker agreement constraint.",
"The complex system of grammatical gender languages entails several morphosyntactic implications for different lexical categories.",
"In this paper, we underscored such implications and explored how different POS and grammatical agreement are involved in gender bias.",
"To this aim, we enriched the MuST-SHE benchmark with new linguistic information, and carried out an extensive evaluation on the behaviour of ST models built with different segmentation techniques and data quantities.",
"On three language pairs (English-French/Italian/Spanish), our study shows that, while all POS are subject to masculine skews, they are not impacted to the same extent.",
"Respecting gender agreement for the translation of related words, instead, is not an issue for current ST models.",
"We also find that ST generates a considerable amount of neutral expressions, suitable to replace gender-inflected ones, which however current test sets do not recognize.",
"Overall, our work reiterates the importance of dedicated analyses that, unlike holistic metrics, can single out system's behaviour on gender phenomena.",
"Accordingly, our results are in line with previous studies showing that, in spite of lower generic performance, character-based segmentation exhibits a better capability at handling feminine translation at different levels of granularity.",
"As our MuST-SHE extension is available for both ST and MT, we invite MT studies to start from our discoveries and resource.",
"We would like to thank the 2021 Summer Internship students at FBK for their contribution: Francesco Fernicola, Sara Giuliani, Lorena Rocio Martn, Silvia Alma Piazzolla, Mlanie Prati, Jana Waldmann.",
"This work was made possible thanks to their extensive annotation work and active participation in fruitful discussions.",
"In this paper, we evaluate whether and to what extent ST models exhibit biased behaviors by systematically and disproportionately favoring masculine forms in translation.",
"Such a behavior is problematic inasmuch it leads to under-representational harms by reducing feminine visibility (Blodgett et al., 2020; Savoldi et al., 2021).",
"Broader impact.",
"While the focus of this work is on the analysis itself, our insights prompt broader considerations.",
"Specifically, our investigation on the relation between data size/segmentation technique and gender bias provides initial cues on which models and components to audit and implement toward the goal of reducing gender bias.",
"This, in particular, may be informative to define the path for emerging 1815 direct ST technologies.",
"Also, our results disaggregated by POS invite reflections on how to intend and mitigate bias by means of interventions on the training data.",
"In fact, while it is known that the MuST-C corpus (Cattoni et al., 2021) used for training comprises a majority of masculine speakers, 21 the fact that certain lexical categories are more biased than others suggests that, on top of more coarse-grained quantitative attempts at gender balancing (Costa-juss and de Jorge, 2020), data curation ought to account for more sensitive, nuanced, and qualitative asymmetries.",
"These also imply how , rather than only how often , gender groups are represented (Wagner et al., 2015; Devinney et al., 2020).",
"Also, while nouns come forth as the most problematic POS, current practices of data augmentation based on a pre-defined occupational lexicon may address stereotyping (Saunders and Byrne, 2020), but do not increase the production of other nonetheless skewed lexical categories.",
"Overall, our enriched resource 22 can be useful to monitor the validity of different technical interventions.",
"Ethic statement.",
"The use of gender as a variable (Larson, 2017) warrants some ethical reflections.",
"Our evaluation on the MuST-SHE benchmark exclusively accounts for linguistic gender expressions.",
"As reported in MuST-SHE data statement (Bender and Friedman, 2018), 23 also for the subset of sentences that contain first-person references 24 (e.g. I'm a student ), speakers' gender information is manually annotated based on the personal pronouns found in their publicly available personal TED profile, and used to check that the indicated (English) linguistic gender forms are rendered in the gold standard translations.",
"While our experiments are limited to the binary linguistic forms represented in the used data, to the best of our knowledge, ST natural language corpora going beyond binarism do not yet exist.",
"25 This is also due to the fact that unlike English which finds itself for several cultural and linguistic reasons as a leader of change toward inclusive forms (Ackerman, 2019) Direct Nonbinary Language based on neomorphemes (Shroy, 21 https://ict.fbk.eu/must-speakers/ 22 It will be released under the same CC BY NC ND 4.0 International license as MuST-SHE.",
"Category 1 in the corpus.",
"25 Saunders et al. (2020) enriched WinoMT to account for non-binary language.",
"While it is only available for MT, such annotations consist of placeholders for neutrality rather than actual non-binary expressions.",
"2016; Papadopoulos, 2019; Knisely, 2020) is nontrivial to fully implement in grammatical gender languages (Hellinger and Buman, 2001; Gabriel et al., 2018b) and still object of experimentation (Redazione, 2020; Attig and Lpez, 2020).",
"However, our manual evaluation expands to the possibility of INL strategies that could be detected in system's output.",
"We underscore that such strategies are recommended and fruitful to avoid the gendering of referents, but are to be considered as concurring to rather than replacements of emerging linguistic innovations (Lpez, 2020).",
"Lastly, we signal that direct ST models may leverage speakers' vocal characteristics as a gender cue to infer gender translation.",
"Although the potential risks of such condition do not emerge and are not addressed in our setting (focused on POS and agreement features as a variable), we endorse the point made by Gaido et al. (2020).",
"Namely, direct ST systems leveraging speaker's vocal biometric features as a gender cue can entail real-world dangers, like the categorization of individuals by means of biological essentialist frameworks (Zim-man, 2020).",
"This can reduce gender to stereotypical expectations about how masculine or feminine voices should sound, and can be especially harmful to transgender individuals, as it can lead to misgen-dering (Stryker, 2008) and invalidation.",
"Note that we experimented with unmodified models for the sake of hypothesis testing without adding variability, but real-world deployment of ST technologies must account for the potential harms arising form the use of direct ST technologies as is ."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method"
] |
[
"Human communication is a collaborative process.",
"Speakers, on top of conveying their own intent, adjust the content and language expressions by taking the listeners into account, including their knowledge background, personalities, and physical capabilities.",
"Towards building AI agents with similar abilities in language communication, we propose Pragmatic Rational Speaker (PRS), a framework extending Rational Speech Act (RSA).",
"The PRS attempts to learn the speaker-listener disparity and adjust the speech accordingly, by adding a light-weighted disparity adjustment layer into working memory on top of speaker's long-term memory system.",
"By fixing the long-term memory, the PRS only needs to update its working memory to learn and adapt to different types of listeners.",
"To validate our framework, we create a dataset that simulates different types of speaker-listener disparities in the context of referential games.",
"Our empirical results demonstrate that the PRS is able to shift its output towards the language that listeners are able to understand, significantly improve the collaborative task outcome.",
"In human communication, speakers often adjust their language production by taking into consideration listeners' personality, background knowledge, perceptual or physical capabilities etc (Clark, 1996).",
"Recent years have seen an increasing amount of work that explores pragmatic reasoning based on Rational Speech Act (RSA) (Andreas and Klein, 2016; Fried et al., 2018a,b; White et al., 2020; Cohn-Gordon et al., 2018), multi-agent emergent communication framework (Lazaridou et al., 2020; Lazaridou and Baroni, 2020), and Theory of Mind in communication (Bara et al., 2021; Zhu et al., 2021).",
"However, except for (Zhu et al., 2021), Work done during undergraduate study at the University of Michigan.",
"most previous works assume that the listeners and the speakers have the same background knowledge and capabilities, including vocabulary size, visual access, and relative locations.",
"This assumption is a great simplification of real-world communication where speakers and listeners often have various types of disparities.",
"To address this limitation, this paper extends the Rational Speech Act (RSA) (Frank and Goodman, 2012) model towards rational agents learning to adapt behaviors based on their experience with the listener.",
"The design choice of our model is inspired by the human cognitive system (Cowan, 2008; Wardlow, 2013) where a limited capacity working memory is built on top of the long-term memory to adjust the output to be task and environment specific.",
"Each communication is a modification on the long-term memory (Reed, 2012) with situation-specific factors.",
"In our framework, we fix the long-term memory which captures lan-2829 guage structure for communication, and introduce a light-weighted working memory (Miyake and Shah, 1999) for the Pragmatic Rational Speaker to modify and accommodate two goals: 1) a task goal which retrieves relevant information from the long-term memory and accomplish the task, and 2) a disparity goal which learns and adjusts the conversation to accommodate the listener's disparity through reinforcement learning.",
"We separate each component as they are independent of each other in utility, and can be easily switched and adapted for new tasks and new environment.",
"Different from previous works which only demonstrate how learned models affect task performance (e.g. (Shridhar et al., 2020; Zhu et al., 2021; Corona et al., 2019)), one of our goals is to also provide transparency on what models have indeed learned towards the end goal.",
"It's well established that end-to-end neural models can often take advantage of spurious data bias to gain end performance.",
"Models that only report end measure without showing their internal works would not be sufficient to tell the whole story about model's abilities.",
"To serve this goal, we situated our investigation in the context of a referential game 1 as shown in Figure 1. We carefully curated a dataset to simulate two types of disparity: knowledge disparity and perceptual disparity .",
"Our empirical results demonstrate that our model is able to significantly improve the collaborative game performance by shifting communication towards the language that the listeners with disparities are able to understand.",
"In addition, our results show that separating working memory from long-term memory leads to faster learning and better performance than the previous model which conducted joint end-to-end learning.",
"Our contributions are the following.",
"1) Following human cognition, we demonstrate the benefits of separating working memory from the long-term memory, compared to end-to-end joint training.",
"2) We propose a new dataset to simulate multiple distinct types of disparities, and demonstrate the pragmatic adaptability of our model.",
"3) Instead of focusing on mere end task performance, we show model's strong language shift ability to accommodate listener's disparities.",
"1 Different from traditional referential ground work as (Liu et al., 2013; Gorniak and Roy, 2004; Siebert and Schlangen, 2008; DeVault et al., 2005; Liu et al., 2012), we adopted this term from a recent line of work (Lazaridou et al., 2020; Andreas and Klein, 2016) to refer to the task described in Figure 1. The dataset and code are available through https://github.com/sled-group/ Pragmatic-Rational-Speaker to facilitate future work on pragmatics and theory of mind in language interpretation and generation.",
"It has been studied (Leung et al., 2021; Stephens et al., 2010; Wardlow, 2013) in psychology that human speakers adjust the way how we speak for successful communication after learning the listener's disparity.",
"Some recent work (Zarrie and Schlangen, 2019; Zhu et al., 2021; Corona et al., 2019; Hawkins et al., 2021) attempt to address similar questions.",
"We build our model upon the following two concepts.",
"The Rational Speech Act (RSA) model (Frank and Goodman, 2012) is a probabilistic model for the speakers and listeners to pragmatically reason about each other's intention.",
"In the context of a referential game (Monroe and Potts, 2015), for example (Figure 1), given an image m , it starts with a literal speaker S 0 to generate caption c : PS 0 ( c | m ) .",
"A rational listener L 1 reasons about the literal speaker's ( S 0 ) strategy and picks the best image that matches the description.",
"A rational speaker S 1 then takes the rational listener's ( L 1 ) strategy into account and produces a caption c that maximizes the collaborative game goal.",
"In previous work (Andreas and Klein, 2016) and (Lazaridou et al., 2020; Lazaridou and Baroni, 2020), the same referential game setup was used to propose a rational speaker that learns to reason the collaborative game and to produce natural sounding image captions based on RSA.",
"However, they were mainly addressing the task goal , assuming the speaker and listener have the exact same capabilities and knowledge background, which is unrealistic.",
"In our work, we created listeners with disparity d and extend this model for the speaker to accommodate both the task and disparities goals.",
"Working memory (also short-term memory) is used in neuropsychology and cognitive science (Cowan,",
"2008; Miyake and Shah, 1999) to refer to the memory that controls attention, plans and carries out behavior.",
"It is a combination of multiple components, including the contribution of long-term memory (Reed, 2012; Sawangjit et al., 2018) and situation-specific task processing (Funahashi, 2017).",
"The classical artificial intelligence work such as ACT (Heise and Westermann, 1989) and SOAR (Laird et al., 1987) also incorporated the concept of working memory to model human short-term memory.",
"The similar concept has been used in recent work such as (Hermann et al., 2017; Hill et al., 2017).",
"Our work is a novel application of the working memory to pragmatically adjust communication for speaker-listener disparities ( disparity goal), and take advantage of the internal simulation architecture to achieve the task goal.",
"Similar to (Kottur et al., 2017; Lazaridou et al., 2020), our model learns to converge language to adapt to listener's disparities through interactions, instead of ground truth supervision on language generation.",
"The speakers have zero prior knowledge on the listener's background nor an oracle access to probe the listener's brain.",
"Different from previous works, our model is able to generalize to distinct types of disparities.",
"In addition, while previous models were trained in an end-to-end joint fashion, our work separates training and demonstrates the efficiency of working memory.",
"Most importantly, few of the previous work were able to showcase model's language capabilities and only evaluate them by the end performance (e.g. accuracy), whereas our work emphasizes on evaluating how well the models learn to shift the language towards better understanding.",
"There are many levels of disparities during verbal communication (Stephens et al., 2010), including phonetic, lexical, grammatical, semantic representations, etc.",
"In our work, we assembled two datasets, and challenge the speaker model to handle two types of disparities: 1) knowledge disparity, and 2) perceptual disparity.",
"The knowledge disparity is simulated through the hypernym dataset, where the listener only understands the hypernym for all the objects (e.g. food instead of pizza), whereas the speaker understands both.",
"This dataset challenges the speaker model at the lexical level to learn what listener's vocab limitation, and shift towards the words that they understand.",
"The perceptual disparity is simulated through the limited visual dataset, where the listener has impaired vision or some objects were physically blocked from the eyesight.",
"This dataset challenges the speaker to shift attention and pick the visible objects for the listener to describe.",
"For control and demonstration purposes, we remove all the animal-related objects and words from listener's training.",
"These datasets are used to simulate listener's disparities and train the listener's model as described in Section 4.2.",
"The speaker's long term memory was trained with the original data which has full knowledge of the vocab and objects, but no idea what the listeners are or aren't capable of.",
"Detailed dataset components can be found in the Appendix.",
"We modified the Abstract Scenes (Gilberto Mateos Ortiz et al., 2015) dataset for our experiments.",
"There are 10020 images, each including 3 ground truth captions, and a median of 6 to 7 objects.",
"We assembled 35k pairs of images that differ by 4 objects as the Hard set, 25k pairs that differ by > 4 objects as the Easy set, and together as the Combined set.",
"The image pairs were split into training, validation and testing by a ratio of 8:1:1.",
"Given a pair of images m 0 , m 1 , the target image indicator t { 0 , 1 } , and the listener's disparity d , the speaker generates a caption c for the target image m t , and the listener needs to pick out the correct target t given c .",
"Both receive a reward of +1 upon correct choice, and 1 otherwise.",
"we start by building the Literal Speaker S 0 , gradually increase model structure and functionality with the vanilla Rational Speaker S 1 and the Pragmatic Rational Speaker S d1 .",
"Upon retrieving a list of candidate captions C from the long-term memory, the final goal for S d1 is to output the best caption c in the working memory, that accommodates both 1) task goal: describes the unique features of the target image, and 2) disparity goal: learns and accommodates the listener's disparity.",
"Table 1 is a brief summary of each model.",
"The Literal Speaker S 0 generates candidate captions c for a given image m (Eq 1), which serves as the long-term memory.",
"The Rational Listener L 1 picks out an image as the target given speaker's description (Eq 2).",
"The vanilla Rational Speaker S 1 achieves the task goal by simulating the listener's mind internally in its working memory (Eq 3).",
"L d 1 incorporates disparity to the Rational Listener.",
"The Pragmatic Rational Speaker S d1 adds a light-weight disparity adjustment layer (Eq 5) to learn and accommodate listener's disparity through interactions, and achieves both goals.",
"Each component can be easily switched and adapted to new tasks or environment.",
"L d 1 : P ( t | m 0 , m 1 , c, d ) PS 1 ( c | m 0 , m 1 , t, d ) P ( t | m 0 , m 1 , d ) (4) S d1 : P ( c | m 0 , m 1 , t, d ) PL d 1 ( t | m 0 , m 1 , c, d ) P ( c | m 0 , m 1 , d ) (5)",
"The Literal Speaker S 0 (Figure 2) is an object detection based image captioning module that generates caption candidates for the target image.",
"o 1 , . . . , o k , b 1 , . . . , b k = ObjDet ( m t ) e 1 , . . . , e k = WordEmb ( o 1 , . . . , o k ) c 1 , . . . , c n = Transformer ( e 1 , . . . , b 1 , . . . ) (6) For a given target image m t , since it's important to ground words to the scenes in order to control the disparities in vocabularies, we applied the object detector YOLO3 (Redmon and Farhadi, 2018) to extract a list of k detected objects O = { o 1 , o 2 , . . . , o k } , and their corresponding bounding boxes B = { b 1 , b 2 , . . . , b k } .",
"Each image chooses at most max _ obj = 9 detected objects, and the names of each were embedded with a pre-trained BERT (Devlin et al., 2019) word embedding E = { e 1 , e 2 , . . . , e k } .",
"These embed-dings are then concatenated with their bounding box locations, and sent to the Transformer Decoder to generate beam _ size = 30 candidate captions C = { c 1 , c 2 , . . . , c n } for each target image.",
"Without disparity concerns, the Rational Listener picks out the image that they believe is the target.",
"g 0 = FT _ Transformer ( m 0 , c ) g 1 = FT _ Transformer ( m 1 , c ) t = argmax i { 0 , 1 } CosSim ( g i , c ) (7) Recall that S 0 used a Transformer decoder to connect the image and its corresponding captions.",
"We reuse the same Fixed pre-trained Training-mode Transformer module (named FT _ Transformer ) to decide which image does the caption ground better in.",
"Adopting the idea of teacher-forcing language training, the output ( g i ) of FT _ Transformer with an input pair ( m i , c ) should closely resemble the original input c if the input image m i is indeed the one used to generate the caption c .",
"By calculating the co-sine similarity of each ( g i , c ) pair, the image that grounds better (higher CosSim ) in the description would be chosen as the target.",
"This module allows the agents to quickly and accurately make the decisions without further training.",
"In theory, if the speaker and the listener were to have the exact same brain (same model and weights), the performance of this task should approach 100%.",
"The results of No Disparity speaker in Figure 3 confirmed the design choice.",
"Without disparity concerns, the Rational Speaker ( S 1 ) fulfills the task goal by simulating (Figure 2) the Rational Listener ( L 1 )'s behavior, and rank the candidate captions generated by the Literal Speaker ( S 0 ) according to how well they can describe the target image apart from the distractors.",
"This design is under the fair assumption that both speakers and listeners are aware of the collaborative game goal, but can be switched for other task purposes.",
"For i { 0 , , n } , where n = | C | : t i , p i = Simulate _ L 1 ( m 0 , m 1 , c i ) c = c argmax i [[ t i == t ]] p i (8) Given an image pair ( m 0 , m 1 ), and a list of candidate captions C = { c 1 , , c n } generated by S 0 , the Rational Speaker goes through each caption c i and simulates how well the listener ( Simulate _ L 1 ) would pick out the correct target image.",
"If a candidate caption c i helps the simulator pick out the correct target image (i.e. t i == t ) with high confidence ( p i ), then it will be chosen as the final caption sent over to the actual listener.",
"The simulated listener shares the same architecture as L 1 and initializes the weights pre-trained from S 0 .",
"By doing so, the Rational Speaker takes the listener's intention into account and achieves the task goal.",
"In the real world, however, it is hardly the case that different agents have the exact same knowledge background, experiences, physical capabilities, etc.",
"The listener's decision making process is influenced by various kinds of disparities d .",
"To study speaker's ability of situated language adjustment, we created two representative types of listeners with different knowledge background and visual capabilities by training different caption grounding modules ( FT _ Transformer ) with the datasets assembled in Section 3. These disparities would challenge the speaker model to adjust the language at different levels.",
"1. L d 11 : Hypernym.",
"With limited vocabulary and knowledge in a certain domain, people tend to refer to objects in their hypernym form (e.g. animal instead of cat).",
"In this experiment, we create listeners that would refer to all the detected objects by their hypernyms.",
"This disparity would require the speaker to switch individual words that share similar meanings.",
"2. L d 21 : Limited Visual.",
"Due to the physical orientation or impaired vision capability, it is likely that some objects are blocked or hardly visible to one party but not the other.",
"In this experiment, we remove all the animal objects from listener's visual detected object list ( O ), and replace the relevant descriptions with the special token [UNK]'.",
"This disparity would require the speaker to shift attention, and choose alternative objects to describe.",
"We investigate in listeners with a subset of speaker's capabilities under the argument that in the opposite case, the listener could use only a subset of the knowledge to achieve best performance without having the speakers to adjust the speech.",
"Other disparities can be inferred through transfer learning or are left for further investigation with broader information access and datasets.",
"On top of the Rational Speaker ( S 1 ), the Pragmatic Rational Speaker incorporates a disparity adjustment layer to learn and accommodate the listener's disparity through emergent communication.",
"For i { 0 , , n } , where n = | C | : q i = MLP ( SentenceEmb ( c i )) a i = [[ t i == t ]] p i q i c = c argmax i a i (9) We use a pretrained BERT model to embed each candidate caption c i , add a single MLP layer, and approximate the REINFORCE policy through Equation 9.",
"The reward ( r c ) for each chosen caption c is +1 or 1 .",
"The loss is calculated for all the chosen captions across each batch (Eq 10).",
"We conducted the same sets of experiments using individual words (object names) instead of sentences to demonstrate the effects of working memory on disparity accommodation and internal task simulation, reducing the noise that came from the imperfection of the image description generator.",
"The simplified pipeline uses the detected object name embedding for disparity adjustment, and the listener picks the target images by conducting simple word matching.",
"and Balance of Goals.",
"Recall that each speaker model has different capabilities (Table 1) and only S d 1 is able to fulfill both task and disparity goals.",
"Implementation details and more experiment results can be found in Appendix.",
"1. [Task Performance] that measures overall accuracy of the collaborative game.",
"Task performance is often the sole evaluation metrics in previous work.",
"2. [Efficiency] that measures time used for model training across tasks.",
"3. [Transparency] that uncovers the underlying distribution shift of vocabulary use learned to accommodate different types of disparities.",
"4. [Balance of Goals] that the working memory needs to consider between the task and disparity goals to achieve maximum performance 5.1 Task Performance Comparison To assess the performance of the speakers in the collaborative game, Figure 3 presents the task accuracies with Literal Speaker ( S 0 ), Rational Speaker ( S 1 ), Pragmatic Rational Speaker ( S d 1 ), and No Disparity ( S nd 1 ).",
"S nd 1 has the same structure as S 1 and was trained on the same disparity dataset as the corresponding listener.",
"It serves as the upper bound of performance.",
"The same experiments also were conducted at the word level.",
"For each type of listener disparity, the performance is S 0 << S 1 < S d 1 < S nd 1 .",
"The vanilla Rational Speaker ( S 1 ) improved the overall performance from Literal Speaker by over 25% because it is achieving the task goal to describe the target 2834 image apart from the distractor.",
"The Pragmatic Rational Speaker ( S d 1 ) is able to learn and adjust for the listener's disparity, and further improve the game performance by 10%.",
"There is still, however a gap between S d 1 and the upper bound S nd 1 , where the speaker and the listener have the exact knowledge and capability limitation, potentially due to the imperfection in caption generations.",
"Breaking down between the hard , easy datasets in Figure 4 (recall that image pairs that differ by 4 objects are in the Hard set, otherwise the Easy set), S d 1 on the easy dataset is able to gain a lot more improvement upon its Rational Speaker compared to the pair trained on the hard dataset.",
"The gap between S d 1 and No Disparity is also a lot smaller for the model trained on the easy dataset.",
"This is likely because when a pair of images differ more objects (easier), the model has more options to adjust upon, hence the larger improvement.",
"Compared to the sentence level model, the word level pragmatic speaker for L d 11 achieves even higher improvement against the corresponding Rational Speaker.",
"They both achieve almost perfect accuracy with close to zero gap to the upper bound.",
"This suggests the high potential of the disparity adjustment design, especially after reducing the caption generation and interpretation noise.",
"To study the training efficiency of the working memory, we compared our model to the joint training Multi-Task leaning model in (Lazaridou et al., 2020)'s work, retrained and evaluated in our dataset.",
"The image captioning model and the REINFORCE Train(min) Accuracy% BLEU4 Joint 19.04 60.14 27.79 Separate 21.02 77.34 29.3 LM 11.59 29.3 WM 9.43 77.34",
"Functional in our task refers to the REINFORCE learning to achieve both task and disparity goals (evaluated by Accuracy), and structural refers to the caption generation loss for natural-sounding language (evaluated by BLEU4).",
"We used f = s = 1 as in previous work for our experiments.",
"Detailed training and comparison strategies can be found in the Appendix.",
"Table 2 shows that for each type of disparity, our model separating working memory from long-term memory is able to achieve higher accuracy and higher BLEU4 score than the joint training.",
"Moreover, the Joint Trained model needs to retrain all the weights for each type of disparity from scratch, whereas our model only needs to train the long-term memory once, and retrain the light weighted working memory for each type of disparity, which is much more efficient.",
"To gain insights in whether the Pragmatic Rational Speaker (PRS) is actually adjusting the descriptions for listeners' disparities or taking the advantage of statistical bias to achieve higher task performance, we plotted the word distribution shift across different types of disparities.",
"Qualitative examples can be found in Figure 6.",
"For each experiment, the word frequencies of all the chosen captions were calculated for the Rational Speakers, the Pragmatic Rational Speakers, and Joint Training.",
"We collected the top choice of each speaker per image 2835",
"the mean and standard deviation in Figure 5.",
"In the Hypernym disparity (Figure 5a) experiment, where the listener only understands the hypernym of detected objects, the lower-case words on the left are the top detected object names, and the upper-case words on the right are hypernyms.",
"On the left side, the word frequencies of PRS significantly dropped from the Rational Speaker.",
"On the right side, the model is maintaining similar level, or using some of the hypernyms more frequently (y-axis in log scale).",
"Note that the Rational Speaker can generate both hypernym and hyponym regardless of disparities, and multiple valid captions available for all speakers to choose from.",
"For the Joint Trained Speaker, we also observed a hyponym usage drop (left), but it's unclear how it accommodates the disparity without using hypernyms.",
"This result shows that PRS learned to avoid using hyponyms, and replaced them with their hypernym to accommodate the disparity.",
"For the Limited Visual disparity (Figure 5b), since all the animal objects are missing for the listener, there is a sharp decline in S d 21 's use of animal related words during the communication.",
"Instead, it is choosing other objects such as hat, and ball to describe the target image.",
"The PRS is accommodating listener L d 21 's disparity by shifting the attention and choosing alternative objects other than animals to communicate.",
"The behavior of the Joint Trained Speaker is harder to interpret.",
"Recall that the working memory of the Pragmatic Rational Speaker ( S d 1 ) has two two goals: 1) Task Goal : an internal simulation of a listener to rank the candidate captions by their uniqueness in describing the target image, and 2) Disparity Goal : a disparity adjustment layer to learn and accommodate the listener's disparity through interactions.",
"Each goal component can be formalized in the above two terms (Equation 11).",
"We parameterized each term with l and d to study how different l : d weight ratio could affect rational speaker's ability to achieve both goals.",
"Figure 7 shows that when the Pragmatic Rational Speaker puts a high emphasis on adjusting the listener's disparity d , it would forget to describe the unique characters of the target image and lower the overall performance.",
"On the other hand when the PRS emphasize too much on the task goal, it would forget to accommodate listener's disparities, and lower the overall performance as well.",
"In the end, we chose l : d = 1 : 1 for all experiments demonstrated above.",
"In this work, we present a novel framework based on the Rational Speech Act framework for pragmatic communication that can adjust the conversation content according to listener's disparities by adding a light-weighted working memory on top of speaker's long-term memory.",
"The Pragmatic Rational Speaker significantly improved the collaborative game performance by shifting the conversation towards the language that the listeners are able to understand.",
"There are, however, several limitations that requires further investigation.",
"First of all, despite recent progress, algorithms that connect language and the visual world are still limited.",
"For example, caption generation, even in this simple setup, often does not faithfully capture what's been conveyed in the images.",
"As our framework heavily relies on the quality of various models that bridge language and vision, e.g., as part of our long term memory, it's important to improve functionality and performance of these base models.",
"We conducted our experiments in a relative simple and artificial environment with the purpose of easy control and demonstration.",
"We emphasize on evaluating model's actual language ability of adjusting for the disparities on top of task performance.",
"The next step would be to apply the framework to more realistic images and interactive environment.",
"Other than listener's knowledge background and perceptual capabilities, there are a lot of other reasons for language communication to be adjusted, such as the physical environment, relative positions, speaker's personalities, etc.",
"Studying how a rational agent can accommodate these disparities would require additional multimodal datasets and information processing methods.",
"At the moment, the Pragmatic Rational Speaker trains a new layer in working memory from scratch for each type of disparity.",
"This could have backward influence on the long-term memory.",
"In lifelong learning (Parisi et al., 2019) like humans, the working memory can shape their long-term memory.",
"At the very least, the model could store each learned disparity adjustments for future encounter.",
"This modification is left for future work.",
"Last but not least, instead of training for every single type of disparity to name, human learners have the ability of meta-learning and zero-shot transferring existing knowledge to a new category.",
"Future work on pragmatic reasoning should be easily adaptable to different disparities and situations.",
"This work was supported in part by the National Science Foundation under grant IIS-1949634.",
"The authors would like to thank the anonymous reviewers for their valuable comments and suggestions."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"objective",
"objective",
"result",
"other",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address.",
"They tend to produce generations that",
"(i) rely too much on copying from the context,",
"(ii) contain repetitions within utterances,",
"(iii) overuse frequent words, and",
"(iv) at a deeper level, contain logical flaws.",
"In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019a) to these cases.",
"We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues.",
"For the last important general issue, we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability.",
"We demonstrate the efficacy of our approach across several dialogue tasks.",
"Open-ended tasks such as dialogue reveal a number of issues with current neural text generation methods.",
"In more strongly grounded tasks such as machine translation and image captioning, current encoder-decoder architectures provide strong performance, where mostly word-level decisions are often taken correctly by the model.",
"However, critical failings are exposed in less constrained generation: reliance on repetitive copying and overuse of frequent words, and an inability to maintain logical coherence.",
"The former shows the learning objective is faulty in that it cannot match simple statistics of the training data, while the latter touches more to the heart of artificial intelligence: (cid:63) Work done while at Facebook AI Research (FAIR).",
"these models do not understand what they are saying.",
"For example, Figure 1 shows how the 345M-parameter GPT2 model (Radford et al., 2019) can give high probability to contradictory generations.",
"In this work, we show how the recently introduced unlikelihood objective (Welleck et al., 2019a) can be generalized to remedy these problems.",
"Unlikelihood is a technique developed for removal of repetition in language model completions, and works by adding an extra term to the objective that forces repetitions to have low probability, alleviating the degenerative problems highlighted in Holtzman et al. (2019).",
"In fact, unlikelihood can be seen as a much more general framework, as we will see.",
"We first generalize unlikelihood to a different domain: dialogue, where we measure statistics of the training distribution in terms of contextual copies, within-utterance repeats, and vocabulary usage.",
"We then develop loss functions that control these statistics, providing improved metrics on several tasks.",
"Secondly, we show how the same tools can be used to address deeper semantic issues in such models.",
"By leveraging existing natural language inference (NLI) data (Welleck et al., 2019b) as supervision against poor quality generations, we train models that assign low probability to generating incoherent and contradictory text.",
"Overall, our approach yields more consistent dialogue models across several axes, and provides a promising framework for further advances.",
"Code and pre-trained models will be made available.",
"2 Dialogue Unlikelihood Training Dialogue Generation Dialogue generation consists in predicting an utterance y = ( y 1 , . . . , y | y | ) given a context x = { s 1 , . . . , s k , u 1 , . . . , u t } that consists of initial context sentences s 1: k (e.g., scenario, knowledge, personas, etc.) followed by dialogue history utterances u 1: t from speakers who take consecutive turns.",
"Likelihood Training Given a dataset D = { ( x ( i ) , y ( i ) ) } derived from a collection of human-human interactions, the standard approach to generative training for dialogue tasks is maximum likelihood estimation (MLE), that minimizes: L ( i ) MLE ( p , x ( i ) , y ( i ) ) = | y ( i ) | (cid:88) t =1 log p ( y ( i ) t | x ( i ) , y ( i ) <t ) , where x ( i ) is a gold context (dialogue history and initial context sentences) and y ( i ) is a gold next-utterance, and y ( i ) t is the t -th token of y ( i ) .",
"Likelihood-based (greedy or beam) decoding applied after training a model with this objective yields sequences with statistics that do not match the original human training sequence distribution.",
"Unlikelihood Training To control for such distribution mismatches, we employ the unlikelihood loss (Welleck et al., 2019a), generalizing it to our setting, and developing a particular form of the loss function for each type of mismatch.",
"The general form of the unlikelihood loss penalizes a set of tokens C t at each time-step, L ( i ) UL ( p , C 1: T , x , y ) = | y | (cid:88) t =1 (cid:88) y c C t ( y c ) log (1 p ( y c | x , y <t )) , where C t V is a subset of the vocabulary, and ( y c ) is a candidate-dependent scale that controls how much the candidate token should be penalized.",
"The overall objective in unlikelihood training then consists of mixing the likelihood and unlikelihood losses, L ( i ) ULE = L ( i ) MLE + L ( i ) UL , (1) https://parl.ai/projects/dialogue_ unlikelihood/ where R is the mixing hyper-parameter.",
"Likelihood tries to model the overall sequence probability distribution, while unlikelihood corrects for known biases.",
"It does this via the set of negative candidates C t calculated at each step t , where we are free to select candidate generation functions depending on the biases to be mitigated.",
"Likelihood pushes up the probability of a gold token y ( i ) t while unlikelihood pushes down the probability of negative candidate tokens y c C t .",
"In Welleck et al. (2019a) the context x consists of a ground-truth sequence ( x = x ( i ) ), the target y is either a ground-truth sequence ( y = y ( i ) ) or a model-generated sequence ( y = y ), and the per-token scale parameter ( y c ) is 1 .",
"In this paper, we demonstrate how unlikelihood can be used as a general framework by applying it to the dialogue domain.",
"We show how varying the contexts x , targets y , candidates C and scaling can be used to improve the coherence and language modeling quality of dialogue models.",
"To do this, we now consider the different biases we wish to mitigate, and construct a specific unlikelihood loss for each in turn.",
"Generative dialogue models are known to both",
"(i) rely too much on copying existing context knowledge or dialogue history; and",
"(ii) repeat themselves within individual utterances.",
"To address this with unlikelihood, we define two types of negative candidate tokens which either appear in a repeating n-gram from the context or from the generated label itself, C context-copy t = (cid:40) { y t } y t repeat context n-gram otherwise , C label-repeat t = (cid:40) { y t } y t repeating label n-gram otherwise , where y t is a token in a repeating context n-gram when y t is part of an n-gram that already appeared in the context tokens x , and is in a repeating label n-gram when y t is part of an n-gram that already appeared in y <t .",
"Given a ground-truth context x ( i ) , we apply these two forms of unlikelihood to a model-generated sequence y ( i ) .",
"In summary, we either apply the per-example loss L ( i ) UL ( p , C context-copy 1: | y | , x ( i ) , y ( i ) ) for controlling context copies, or L ( i ) UL ( p , C label-repeat 1: | y | , x ( i ) , y ( i ) ) .",
"for controlling label repeats.",
"We also consider mixing the two losses to mitigate both issues.",
"Neural sequence models trained with maximum likelihood generate sequences with token distributions that differ from those of human text (Dinan et al., 2020; Holtzman et al., 2019).",
"In particular, these models tend to produce high frequency tokens too often and low frequency tokens too rarely, where frequency is defined by the human token distribution.",
"We address this with unlikelihood by penalizing tokens according to the mismatch between the model and ground-truth unigram distributions.",
"Specifically, we first maintain an empirical estimate of the model's unigram distribution p model ( y t ) and the human distribution p ( y t ) : p model ( y t ) = count ( y t ) | Y | , where Y is a collection of token predictions on a subset of training data D (cid:48) (e.g. the preceding k = 256 batches), and count ( y t ) is the number of occurrences of y t in Y .",
"This is computed using model sequences ( y = y ) , defining Y as the collection of all tokens in all y .",
"We wish to push down the probability of tokens appearing too often, i.e. when p model ( y t ) > p ( y t ) .",
"For the unlikelihood loss, each step's candidate is thus the current token, C identity t = { y t } , and each to-ken's unlikelihood loss is scaled according to the mismatch between the approximated model and human distributions, ( y c ) = p model ( y c ) log (cid:18) p model ( y c ) p ( y c ) (cid:19) .",
"The unlikelihood loss for a token y c is non-zero when the token occurs more often in the model's estimated unigram distribution.",
"In summary, the resulting per-example loss is L ( i ) UL ( p , C identity 1: | y | , x ( i ) , y ) where y is a model-generated sequence.",
"Neural generation models appear fluent, especially when pre-trained on large datasets, but are still poor at understanding the language they produce.",
"That is, they can produce logically or factually inaccurate, or contradicting statements (Welleck et al., 2019b; Zhang et al., 2018; Hayashi et al., 2019; Petroni et al., 2019).",
"Here, we show how the unlikelihood objective can be used to train such models to assign low probability to inconsistent and contradictory utterances.",
"To do so, we assume the existence of training data of both positive and negative examples of coherent behavior.",
"There is a raft of recent large-scale, high quality data that can be massaged into this form, from natural language inference (NLI) tasks (Bowman et al., 2015; Williams et al., 2018; Welleck et al., 2019b) to commonsense reasoning tasks (Zellers et al., 2019; Qin et al., 2019).",
"Two collections of data can be derived from the labels of such a supervised task: D + = { ( x ( i ) , y ( i )+ ) } , D = { ( x ( i ) , y ( i ) ) } , where D + is coherent behavior, e.g. neutral or entailing data in NLI, and D is incoherent behavior, e.g. contradictions.",
"In general, many forms of this type of data can be collected, not just NLI, and it is also not necessary for the contexts x ( i ) to overlap as we have written here.",
"Standard likelihood training can then be performed on coherent data D + , while the unlikelihood objective is applied to D as we wish to push down the probability of generating the incoherent response y given a context x .",
"That is, given an incoherent pair ( x , y ) we use the loss LUL ( p , C identity 1: | y | , x , y ) , where we penalize each token in the target ( C identity t = { y t } ).",
"Hence, the loss makes generating the contradicting sentences less likely.",
"Our work provides new applications of unlikelihood training (Welleck et al., 2019a), showing that unlikelihood offers a general framework for improving generative models, and in particular dialogue models.",
"Outside of that work, the use of negative training in dialogue retrieval, rather than generation, has been previously extensively studied, see e.g. (Humeau et al., 2019; Nugmanova et al., 2019).",
"In the area of generative dialogue, a number of works have focused on improving the standard likelihood training approach.",
"Closer to our work is that of He and Glass (2019) which developed the approach of negative training to prevent generic and malicious responses in dialogue models.",
"In terms of improving repetition and specificity, a recent alternative approach is that of control (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; See et al., 2019).",
"Nucleus sampling (Holtzman et al., 2019) can help to remove generic or repetitive utterances at the expense of accuracy, but was shown to be inferior to beam blocking, which in turn was shown to be inferior to unlikelihood in Welleck et al. (2019a).",
"In terms of dialogue coherence, Welleck et al. (2019b) showed that retrieval, but not generative models, could be improved with NLI as a re-scorer, while Yang et al. (2018) multi-tasked with NLI.",
"The work of Gabriel et al. (2019) has also studied improving narrative flow with a discriminative rescorer, but in that case for generated language.",
"In our work, the improvements are tightly integrated into the training of the model itself.",
"In all of our experiments we employ a large pre-trained seq2seq Transformer (Vaswani et al., 2017) as our base model, which we then fine-tune for particular tasks with the objectives outlined in Section 2 and specified in each experiment below.",
"Following previous work (Humeau et al., 2019), we pre-train our model on dialogue data, using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io, training to generate a comment conditioned on the full thread leading up to the comment, spanning 2200 M training examples.",
"Our Transformer model consists of an 8 layer encoder, 8 layer decoder with 512-dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation of Miller et al. (2017).",
"The model was trained with a batch size of 3072 sequences for approximately 3M updates using a learning rate of 5e-4, and an inverse square root scheduler.",
"This pre-training took approximately two weeks using 64 NVIDIA V100s.",
"We use the ConvAI2 persona-based dialogue (Zhang et al., 2018), Wizard of Wikipedia",
"knowledge-grounded dialogue (Dinan et al., 2019) and ELI5 long-form question answering (Fan et al., 2019) datasets to evaluate the effect of using unlikelihood to reduce copying and repetition in model generated utterances.",
"On each dataset, we fine-tune the pre-trained pushshift.io Reddit model, then evaluate by generating next-utterances for dialogue contexts from the test set (or validation in ConvAI2, as the test set is hid-den).",
"We use greedy decoding in our main experiments for simplicity and scalability, but we also obtained similar results with beam search, shown in Appendix A. To measure label repetition in a sequence y , we use the portion of duplicate n-grams: 1 .",
"and report the metric averaged over the examples.",
"Label repetition increases from zero as the model generates more repeated n-grams.",
"To measure context repetition, we measure the fraction of gen-Repetition Model PPL F1 Context Label Human -.009 .010 MLE Baseline 21.0 .130 .033 .617 UL (Context only) 21.4 .163 .008 .322 UL (Label only) 21.4 .183 .015 .055 UL (Context + Label) 21.8 .184 .009 .078 Table 3: Evaluation on the ELI5 task test set, comparing standard likelihood (MLE) with context and label repetition unlikelihood loss training.",
"and report the metric averaged over the examples.",
"Context repetition increases when the model copies' n-grams from the context.",
"To quantify language modeling quality, we use standard perplexity and F1 metrics.",
"We use the pre-trained model fine-tuned with MLE as the baseline, and compare it against the pre-trained model fine-tuned with copy and repetition unlikelihood ( 2.1).",
"Results Results for ConvAI2 are shown in Table 1.",
"We see that training unlikelihood using only-contexts or only-labels reduces their corresponding metrics dramatically compared to the MLE baseline.",
"Training with both contextand label-repetition unlikelihood reduced both context repetitions (by 69%, .0352 vs. .1131) and label repetitions (by 89%, .0023 vs .0210) compared to the MLE baseline, much closer to human levels, while keeping perplexity essentially constant.",
"Comparatively, the Wizard of Wikipedia MLE baseline experiences a much larger problem with context repetition, due to its tendency to copy grounded knowledge verbatim (Table 2).",
"Results for ELI5, shown in Table 3, show that it has an especially large problem with label repetition, and that label-unlikelihood is able to reduce the repetitions by 91% (.055 vs .617), while significantly boosting F1 (.130 to .182).",
"Figures 2 and 3 show perplexity as a function of label and context repeats respectively using unlikelihood on ELI5.",
"The parameter can clearly control repeats smoothly, with only very high values resulting in increased perplexity.",
"Human Evaluation Finally, we perform a human evaluation using the same pairwise evaluation scheme as (Fan et al., 2019) performed on ELI5, comparing the MLE baseline to UL (Label only) which asks: Which response answers the question better?",
"The evaluators are asked to consider both the readability and accuracy of the answer.",
"Results are given in Figure 4 (left), showing a statistically sig-nificant improvement over the baseline (150 trials, two tailed binomial test, p < 0 . 01 ).",
"Further details are given in Appendix C. 4.2 Vocabulary Usage We evaluate the ability of vocabulary unlikelihood ( 2.2) to reduce the mismatch between model and human token distributions.",
"We use the ConvAI2 dataset, where our baseline is again trained using maximum likelihood.",
"Starting with the baseline model, we then fine-tune several models using vocab unlikelihood at logarithmically interpolated values of [1 , 1000] .",
"We partition the vocabulary into frequent', medium', rare', and rarest' using the human unigram distribution computed with the ConvAI2 training set, corresponding to the sorted token sets whose cumulative mass accounts for the top 40%, the next 30%, the next 20% and the final 10% of usage, respectively.",
"We evaluate a model by generating utterances given contexts from the ConvAI2 validation set, and compute the fraction of tokens within each class.",
"Results Figure 5 shows how the vocabulary distribution obtained after unlikelihood training is affected by the choice of mixing hyperparameter (Eq. 1): it can smoothly transition between the human training distribution and the MLE trained distribution (Baseline'), which is far from the human one.",
"Table 4 compares the MLE baseline with unlikelihood with increasing values in terms of distribution and F1 score.",
"The vocabulary unlikelihood fine-tuning shifts probability mass from the over-represented frequent words towards underrepresented medium and rare words, with the effect strengthening as increases.",
"At a small cost to perplexity and F1, the unlikelihood tuning reduced the overuse of common tokens by 9 points, matching the human rate, while improving the production of rare tokens by 3 percentage points.",
"Human Evaluation Finally, we perform a human evaluation using the ACUTE-EVAL framework (Li et al., 2019), comparing the MLE baseline to UL for various .",
"First, 252 human-bot conversations (8 turns each) are collected, and then models are compared pairwise by asking the question: Who would you prefer to talk to for a long conversation?",
"For these experiments we compare with both methods generating using beam with context blocking of trigrams.",
"Results are given in Figure 4 (right), showing a statistically significant improvement over the baseline according to humans (two tailed binomial test, p < 0 . 01 ).",
"Further details are given in Appendix C. 4.3 Contradictions We use the dialogue natural language inference (NLI) task of Welleck et al. (2019b) to obtain labeled non-contradicting and contradicting dialogue sentence pairs to use in unlikelihood training ( 2.3).",
"Dialogue NLI contains utterances labeled as entailing (E), neutral (N) or contradiction (C), given a premise that is either a persona sentence (an initial context sentence describing a dialogue agent's personality) or another dialogue utterance = 10 1 = 10 2 0% 25% 50% 75% 100% W i nn i n g P e r ce n t a g e Repetition (ELI5) Vocabulary (ConvAI2) MLE Baseline Unlikelihood Figure 4: Human evaluation experiments for label unlikelihood on ELI5 (left), and vocabulary unlikelihood on ConvAI2 for two values of (right).",
"from the Persona-Chat dialogue task (Zhang et al., 2018).",
"We show examples from Dialogue NLI in Figure 6: Dialogue NLI from (Welleck et al., 2019b).",
"Figure 6.",
"The original data consists of sentence pairs ( s 1 , s 2 ) along with a label (E, N, or C), and was constructed by developing a schema and employing crowdworkers to label utterances with relation triples.",
"The labels are then inferred from the triple representation.",
"We first transform the original classification dataset into a form useful for unlikelihood training of a generative dialogue model.",
"We consider two setups:",
"(i) a two utterance generation task; and",
"(ii) a full dialogue generation task.",
"Two Utterance Generation Task We adapt the initial dialogue NLI dataset by using entailing and neutral training sentence pairs as plausible positive utterances, and contradicting pairs as negatives.",
"That is, if a pair ( s 1 , s 2 ) from Dialogue NLI has label E or N, the example ( x , y ) = ( s 1 , s 2 ) is added to D + , otherwise (label C) it is added to D .",
"We consider two types of entailment: entailing sentence pairs that appear together in a dialogue in the original Persona-Chat dataset and are therefore natural (entailment'), and those that only entail via their triple relations (triple-entailment').",
"The latter are more challenging, noisier targets.",
"Evaluation is performed by measuring the test set perplexity over the four target label types, where contradictions should have relatively higher perplexity.",
"We additionally evaluate a selection accuracy task, where for each test example there are two candidate responses: a positive and a negative (contradicting) statement.",
"The candidate response with the lowest perplexity is considered to be the model's selection, and we measure the selection success rate.",
"Evaluation is broken down by positive type (entailment, triple-entailment, neutral).",
"Dataset statistics are given in Table 5.",
"Full Dialogue Task To evaluate in a more realistic setup that involves full dialogue rather than a single utterance, we take full Persona-Chat dialogues (Zhang et al., 2018) similar to Figure 6, and map back the dialogue NLI data to provide positive and negative continuations of the dialogue.",
"We consider continuations as either triple entailing utterances, neutral utterances or contradictions where the relation triple is used to match the existing persona or dialogue turns by the same speaker to induce the label.",
"That is, an example ( x , y ) consists of a dialogue history x = { p 1 , . . . , p k , u 1 , . . . , u t } and utterance y = s 2 , where ( s 1 , s 2 ) is a sentence pair from Dialogue NLI, and at least one sentence in x has the same relation triple as s 1 .",
"When the pair ( s 1 , s 2 ) is labeled as E or N in Dialogue NLI, the example ( x , y ) is added to D + , and otherwise it is added to D .",
"Results Our MLE baseline obtains a perplexity of 11.4, in line with current best systems on this task (Lewis et al., 2019).",
"Unfortunately, despite being good on such standard metrics, our baseline models fail at our coherence task.",
"As seen in Table 6 for the two utterance task, the perplexity of contradicting utterances (12.5) is on average lower than for neutral (36.7) or triple-entailing utterances (17.5), although it is higher than entailing utterances.",
"We believe this is due to contradicting utterances having high word overlap with the premise utterance, coupled with an inability to judge incoherence.",
"Viewed as a selection task between utterances, picking the utterance with the lowest perplexity, this means the selection rates of non-contradicting utterances are very low, e.g. picking neutral utterances over contradicting utterances only 18% of the time.",
"Even fully entailing utterances are only picked 73% of the time.",
"Similar results are found on the full dialogue task as well, see Table 7.",
"Unlikelihood training brings large improvements in coherence metrics, whilst minimally impacting overall dialogue perplexity.",
"After applying unlikelihood, perplexity for contradicting utterances has a clear signature, with very large av-Selection Accuracy Perplexity Data + Model Entail",
"erage values compared to entailing or neutral utterances, e.g. 248.9 vs. 9.1 for contradict vs. entail on the two utterance task.",
"This converts to corresponding large increases in selection accuracy across all types on both tasks, e.g., an increase from 18% to 78% on neutral statements on the two utterance task, and from 37.4% to 69.8% on the full dialogue task.",
"Some example model predictions are given in Figure 7, comparing the MLE baseline and unlikelihood model perplexities of generating the given hypotheses.",
"The likelihood model cannot differentiate between contradicting and entailing statements easily, while there are large perplexity differences for the unlikelihood model in these cases.",
"Generating consistent and coherent human-like dialogue is a core goal of natural language research.",
"We studied several aspects that contribute to that goal, defined metrics to measure them, and proposed algorithms that improve them, mitigating some of the failings of maximum likelihood training, the current dominant approach.",
"Our method defines objective functions under the umbrella of unlikelihood: during training, we wish to make inconsistent dialogue unlikely by lowering the probability of such events occurring.",
"This makes generative models repeat themselves less, copy the context less, and use more rare words from the vocabulary closer to matching human statistics.",
"Further, utilizing supervised datasets with labeled coherent and incoherent utterances and applying unlikelihood yields measurably improved levels of coherence with respect to the aspect measured, in this case contradiction.",
"Future work could apply this same technique with other supervised data, e.g. correcting causal or commonsense reasoning errors (Zellers et al., 2019; Qin et al., 2019)."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"result",
"method",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain"
] |
[
"Trust is implicit in many online text conversationsstriking up new friendships, or asking for tech support.",
"But trust can be betrayed through deception.",
"We study the language and dynamics of deception in the negotiation-based game Diplomacy, where seven players compete for world domination by forging and breaking alliances with each other.",
"Our study with players from the Diplomacy community gathers 17,289 messages annotated by the sender for their intended truthfulness and by the receiver for their perceived truthfulness.",
"Unlike existing datasets, this captures deception in long-lasting relationships, where the interlocutors strategically combine truth with lies to advance objectives.",
"A model that uses power dynamics and conversational contexts can predict when a lie occurs nearly as well as human players.",
"A functioning society is impossible without trust.",
"In online text interactions, users are typically trusting (Shneiderman, 2000), but this trust can be betrayed through false identities on dating sites (Toma and Hancock, 2012), spearphishing attacks (Dhamija et al., 2006), sockpuppetry (Ku-mar et al., 2017) and, more broadly, disinformation campaigns (Kumar and Shah, 2018).",
"Beyond such one-off antisocial acts directed at strangers, deception can also occur in sustained relationships, where it can be strategically combined with truthfulness to advance a long-term objective (Cornwell and Lundgren, 2001; Kaplar and Gordon, 2004).",
"We introduce a dataset to study the strategic use of deception in long-lasting relationships.",
"To collect reliable ground truth in this complex scenario, we design an interface for players to naturally generate and annotate conversational data while playing a negotiation-based game called Diplomacy.",
"These annotations are done in real-time as the players send and receive messages.",
"While this game setup might not directly translate to real-world situations, it enables computational frameworks for studying deception in a complex social context while avoiding privacy issues.",
"After providing background on the game of Diplomacy and our intended deception annotations (Section 2), we discuss our study (Section 3).",
"To probe the value of the resulting dataset, we develop lie prediction models (Section 4) and analyze their results (Section 5).",
"The Diplomacy board game places a player in the role of one of seven European powers on the eve of World War I.",
"The goal is to conquer a simplified map of Europe by ordering armies in the field against rivals.",
"Victory points determine the success of a player and allow them to build additional armies; the player who can gain and maintain the highest number of points wins.",
"1 The mechanics of the game are simple and deterministic: armies, represented as figures on a given territory, can only move to adjacent spots and the side with the most armies always wins in a disputed move.",
"The game movements become publicly available to all players after the end of a turn.",
"Because the game is deterministic and everyone begins with an equal amount of armies, a player cannot win the game without forming alliances with other playershence the name of the game: Diplomacy.",
"Conquering neighboring territories depends on support from another player's armies.",
"After an alliance has outlived its usefulness, a player often dramatically breaks it to take advantage of their erstwhile ally's vulnerability.",
"Table 1 shows the end of one such relationship.",
"As in real life, to succeed a betrayal must be a surprise to the victim.",
"Thus, players pride themselves on being able to lie and detect lies.",
"Our study uses their skill and passion to build a dataset of deception created by battle-hardened diplomats.",
"Senders annotate whether each message they write is an ACTUALLIE and recipients annotate whether each message received is a SUSPECTED LIE .",
"Further details on the annotation process are in Section 3.1.",
"Figure 1 shows the raw counts of one game in our dataset.",
"But numbers do not tell the whole story.",
"We analyze this case study using rhetorical tactics (Cialdini and Goldstein, 2004), which Oliveira et al. (2017) use to dissect spear phishing e-mails and Anand et al. (2011) apply to persuasive blogs.",
"Mentions of tactics are in italic (e.g., authority ); context for quotes in Appendix, Table 7.",
"For the rest of the paper, we will refer to players via the name of their assigned country.",
"1 In the parlance of Diplomacy games, points are supply centers in specific territories (e.g., London).",
"Having more supply centers allows a player to build more armies and win the game by capturing more than half of the 34 supply centers on the board.",
"Through two lie-intense strategiesconvincing England to betray Germany and convincing all remaining countries to agree to a drawItaly gains control of the board.",
"Italy's first deception is a plan with Austria to dismantle Turkey.",
"Turkey believes Italy's initial assurance of non-aggression in 1901.",
"Italy begins by excusing his initial silence due to a rough day at work, evoking empathy and likability .",
"While they do not fall for subsequent lies, Turkey's initial gullibility cements Italy's first-strike advantage.",
"Meanwhile, Italy proposes a long-term alliance with England against France, packaging several small truths with a big lie.",
"The strategy succeeds, eliminating Italy's greatest threat.",
"Local threats eliminated, Italy turns to rivals on the other end of the map.",
"Italy persuades England to double-cross its long-time ally Germany in a moment of scarcity : if you do not act now, there will be nowhere to expand.",
"England accepts help from ascendant Italy, expecting reciprocity .",
"However, Italy aggressively and successfully moves against England.",
"The last year features a meta-game deception.",
"After Italy becomes too powerful to contain, the remaining four players team up.",
"Ingeniously, Italy feigns acquiescence to a five-way draw, individually lying to each player and establishing authority while brokering the deal.",
"Despite Italy's record of deception, the other players believe the proposal (annotating received messages from Italy as truthful) and expect a 1907 endgame, the year with the most lies.",
"Italy goes on the offensive and knocks out Austria.",
"Italy's summary of the game in their own words is in the Appendix, Table 6.",
"Each game has relationships that are forged and then riven.",
"In another game, an honest attempt by a strong Austria to woo an ascendant Germany backfires, knocking Austria from the game.",
"Germany builds trust with Austria through a believed fictional experience as a Boy Scout in Maine ( likability ).",
"In a third game, two consecutive unfulfilled promises by an ambitious Russia leads to a quick demise, as their subsequent excuses and apologies are perceived as lies (failed consistency ).",
"In another game, England, France, and Russia simultaneously attack Germany after offering duplicitous assurances.",
"Game outcomes vary despite the identical, balanced starting board, as different players use unique strategies to persuade, and occasionally deceive, their opponents.",
"Statements can be incorrect for a host of reasons: ignorance, misunderstanding, omission, exaggeration.",
"Gokhman et al. (2012) highlight the difficulty of finding willful, honest, and skilled deception outside of short-term, artificial contexts (De-Paulo et al., 2003).",
"Crowdsourced and automatic datasets rely on simple negations (Prez-Rosas et al., 2017) or completely implausible claims (e.g., Tipper Gore was created in 1048 from Thorne et al. (2018)).",
"While lawyers in depositions and users of dating sites will not willingly admit to their lies, the players of online games are more willing to revel in their deception.",
"We must first define what we mean by deception.",
"Lying is a mischaracterization; it's thus no surprise that a definition may be divisive or the subject of academic debate (Gettier, 1963).",
"We provide this definition to our users: Typically, when [someone] Figure 2: Every time they send a message, players say whether the message is truthful or intended to deceive. The receiver then labels whether incoming messages are a lie or not. Here Italy indicates they believe a message from England is truthful but that their reply is not. lies [they] say what [they] know to be false in an attempt to deceive the listener (Siegler, 1966).",
"An orthodox definition requires the speaker to utter an explicit falsehood (Mahon, 2016); skilled liars can deceive with a patina of veracity.",
"A similar definition is required for prosecution of perjury, leading to a paucity of convictions (Bogner et al., 1974).",
"Indeed, when we ask participants what a lie looks like, they mention evasiveness, shorter messages, over-qualification, and creating false hypothetical scenarios (DePaulo et al., 2003).",
"Previous work on the language of Diplomacy (Nic-ulae et al., 2015) lacked access to players' internal state and was limited to post-hoc analysis.",
"We improve on this by designing our own interface that gathers players' intentions and perceptions in real-time (Section 3.1).",
"As with other highly subjective phenomena like sarcasm (Gonzlez-Ibez et al., 2011; Bamman and Smith, 2015), sentiment (Pang et al., 2008) and framing (Greene and Resnik, 2009), the intention to deceive is reflective on someone's internal state.",
"Having individuals provide their own labels for their internal state is essential as third party annotators could not accurately access it (Chang et al., 2020).",
"Most importantly, our gracious players have allowed this language data to be released in accordance with IRB authorized anonymization, encouraging further work on the strategic use of deception in long-lasting relations.",
"2 2 Data available at http://go.umd.edu/diplomacy_data and as part of ConvoKit http://convokit.cornell.edu .",
"This dataset requires both a social and technical setup: finding a community that plays Diplomacy online and having them use a framework for annotating these messages.",
"We need two technical components for our study: a game engine and a chat system.",
"We choose Backstabbr 3 as an accessible game engine on desktop and mobile platforms: players input their moves and the site adjudicates game mechanics (Chiodini, 2020).",
"Our communication framework is atypical.",
"Thus, we create a server on Discord, 4 the group messaging platform most used for online gaming and by the online Diplomacy community (Coberly, 2019).",
"The app is reliable on both desktop and mobile devices, free, and does not limit access to messages.",
"Instead of direct communication, players communicate with a bot; the bot does not forward messages to the recipient until the player annotates the messages (Figure 2).",
"In addition, the bot scrapes the game state from Backstabbr to sync game and language data.",
"Annotation of lies is a forced binary choice in our experiment.",
"Explicitly calling a statement a lie is difficult, and people would prefer degrees of deception (Bavelas et al., 1990; Bell and DePaulo, 1996).",
"Thus, we follow previous work that views linguistic deception as binary (Buller et al., 1996; Braun and Van Swol, 2016).",
"Some studies make a more fine-grained distinction; for example, Swol et al. (2012) separate strategic omissions from blatant lies (we consider both deception).",
"However, because we are asking the speakers themselves (and not trained annotators) to make the decision, we follow the advice from crowdsourcing to simplify the task as much as possible (Snow et al., 2008; Sabou et al., 2014).",
"Long messages can contain both truths and lies, and we ask players to categorize these as lies since the truth can be a shroud for their aims.",
"The Diplomacy players maintain an active, vibrant community through real-life meetups and online play (Hill, 2014; Chiodini, 2020).",
"We recruit top players alongside inexperienced but committed players in the interest of having a diverse pool.",
"Our experiments include top-ranked players and community leaders from online platforms, grizzled in-person tournament players with over 100 past games, and board game aficionados.",
"These players serve as our foundation and during initial design helped us to create a minimally annoying interface and a definition of a lie that would be consistent with Diplomacy play.",
"Good playersas determined by active participation, annotation and game outcomeare asked to play in future games.",
"In traditional crowdsourcing tasks compensation is tied to piecework that takes seconds to complete (Buhrmester et al., 2011).",
"Diplomacy games are different in that they can last a month.",
".",
". and people already play the game for free.",
"Thus, we do not want compensation to interfere with what these players already do well: lying.",
"Even the obituary of the game's inventor explains Diplomacy rewards all manner of mendacity: spying, lying, bribery, rumor mongering, psychological manipulation, outright intimidation, betrayal, vengeance and backstabbing (the use of actual cutlery is discouraged) (Fox, 2013).",
"Thus, our goal is to have compensation mechanisms that get people to play this game as they normally would, finish their games, and put up with our (slightly) cumbersome interface.",
"Part of the compensation is non-monetary: a game experience with players that are more engaged than the average online player.",
"To encourage complete games, most of the payment is conditioned on finishing a game, with rewards for doing well in the game.",
"Players get at least $40 upon finishing a game.",
"Additionally, we provide bonuses for specific outcomes: $24 for winning the game (an evenly divisible amount that can be split among remaining players) and $10 for having the most successful lies, i.e., statements they marked as a lie that others believed.",
"5 Diplomacy usually ends with a handful of players dividing the board among themselves and agreeing to a tie.",
"In the game described in Section 2.1, the remaining four players shared the winner's pool with Italy after 10 in-game years, and Italy won the prize for most successful lies.",
"5 The lie incentive is relatively small (compared to incentives for participation and winning) to discourage an opportunistic player from marking everything as a lie.",
"Games were monitored in real-time and no player was found abusing the system (marking more than 20% lies).",
"Table 2 quantitatively summarizes our data.",
"Messages vary in length and can be paragraphs long (Figure 3).",
"Close to five percent of all messages in the dataset are marked as lies and almost the same percentage (but not necessarily the same messages) are perceived as lies, consistent with the veracity effect (Levine et al., 1999).",
"In the game discussed above, eight percent of messages are marked as lies by the sender and three percent of messages are perceived as lies by the recipient; however, the messages perceived as lies are rarely lies (Figure 4).",
"We collect anonymous demographic information from our study participants: the average player identifies as male, between 20 and 35 years old, speaks English as their primary language, and has played over fifty Diplomacy games.",
"6 Players self-assess their lying ability before the study.",
"The average player views themselves as better than average at lying and average or better than average at perceiving lies.",
"6 Our data skews 80% male and 95% of the players speak English as a primary language.",
"Ages range from eighteen and sixty-four.",
"Game experience is distributed across beginner, intermediate, and expert levels.",
"In a post-game survey, players provide information on whom they betrayed and who betrayed them in a given game.",
"This is a finer-grained determination than the post hoc analysis used in past work on Diplomacy (Niculae et al., 2015).",
"We ask players to optionally provide linguistic cues to their lying and to summarize the game from their perspective (examples in Appendix, Table 6).",
"Four possible combinations of deception and perception can arise from our data.",
"The sender can be lying or telling the truth.",
"Additionally, the receiver can perceive the message as deceptive or truthful.",
"We name the possible outcomes for lies as Deceived or Caught, and the outcomes for truthful messages as Straightforward or Cassandra, 7 based on the receiver's annotation (examples in Table 3, distribution in Figure 4).",
"We build computational models both to detect lies to better understand our dataset.",
"The data from the user study provide a training corpus that maps language to annotations of truthfulness and deception.",
"Our models progressively integrate informationconversational context and in-game power dynamicsto approach human parity in deception detection.",
"7 In myth, Cassandra was cursed to utter true prophecies but never be believed.",
"For a discussion of Cassandra's curse vis a vis personal and political oaths, see Torrance (2015).",
"We investigate two phenomena: detecting what is intended as a lie and what is perceived as a lie.",
"However, this is complicated because most statements are not lies: less than five percent of the messages are labeled as lies in both the ACTUAL LIE and the SUSPECTED LIE tasks (Table 2).",
"Our results use a weighted F 1 feature across truth and lie prediction, as accuracy is an inflated metric given the class imbalance (Japkowicz and Stephen, 2002).",
"We thus adopt an in-training approach (Zhou and Liu, 2005) where incorrect predictions of lies are penalized more than truthful statements.",
"The relative penalty between the two classes is a hyper-parameter tuned on F 1 .",
"Before we move to computational models for lie detection, we first establish the human baseline.",
"We know when senders were lying and when receivers spotted a lie.",
"Humans spot 88.3% of lies.",
"However, given the class imbalance, this sounds better than it is.",
"Following the suggestion of Levine et al. (1999), we focus on the detection of lies, where humans have a 22.5 Lie F 1 .",
"To prevent overfitting to specific games, nine games are used as training data, one is used for validation for tuning parameters, and two games are test data.",
"Some players repeat between games.",
"Logistic regression models have interpretable coefficients which show linguistic phenomena that correlate with lies.",
"A word that occurs infrequently overall but often in lies, such as honest' and can-didly', helps identify which messages are lies.",
"Niculae et al. (2015) propose linguistic Harbingers that can predict deception.",
"These are word lists that cover topics often used in interpersonal communication claims , subjectivity , premises , contingency , comparisons , expansion , temporal language associated with the future , and all other temporal language (complete word list in Appendix, Table 8).",
"The Harbingers word lists do not provide full coverage, as they focus on specific rhetorical areas.",
"A logistic regression model with all word types as features further improves F 1 .",
"Power dynamics influence the language and flow of conversation (Danescu-Niculescu-Mizil et al., 2012, 2013; Prabhakaran et al., 2013).",
"These dynamics may influence the likeliness of lying; a stronger player may feel empowered to lie to their neighbor.",
"Recall that victory points (Section 2) encode how well a player is doing (more is better).",
"We represent the power differential as the difference between the two players.",
"Peers will have a zero differential, while more powerful players will have a positive differential with their interlocutor.",
"The differential changes throughout the game, so this feature encodes the difference in the season the message was sent.",
"For example, a message sent by an Italy with seven points to a Germany with two points in a given season would have a value of five.",
"While less interpretable, neural models are often more accurate than logistic regression ones (Ribeiro et al., 2016; Belinkov and Glass, 2019).",
"We build a standard long short-term memory network (Hochreiter and Schmidhuber, 1997, LSTM ) to investigate if word sequencesignored by logistic regressioncan reveal lies.",
"Integrating message context and power dynamics improves on the neural baseline.",
"A Hierarchical LSTM can help focus attention on specific phrases in long conversational contexts.",
"In the same way it would be difficult for a human to determine prima facie if a statement is a lie without previous context, we posit that methods that operate at the level of a single message are limited in the types of cues they Human Context LSTM+Power+BERT Context LSTM+Power Context LSTM+BERT Context LSTMLSTM Bag of Words+PowerBag of Words Harbingers+PowerHarbingersMajority Class Random 58.1 56.1 57.2 52.7 55.8 53.8 54.9 54.3 52.9 52.8 47.8 39.8 Macro F1 22.5 20.9 27.0 13.5 19.2 13.7 20.2 19.1 23.7 24.6 14.9 Lie F1 A c t u a l L i e 0 20 40 60 Context LSTM+Power+BERT Context LSTM+Power Context LSTM+BERT Context LSTMLSTM Bag of Words+PowerBag of Words Harbingers+PowerHarbingersMajority Class Random 53.6 53.3 53.3 54.3 53.8 51.6 51.5 45.1 45.9 48.3 38.3 0 10 20 12.4 13.0 15.1 15.0 13.6 13.9 13.7 15.5 14.7 11.8 S u s p e c t e d L i e Figure 5: Test set results for both our ACTUAL LIE and SUSPECTED LIE tasks.",
"can extract.",
"The hierarchical LSTM is given the context of previous messages when determining if a given message is a lie, which is akin to the labeling task humans do when annotating the data.",
"The model does this by encoding a single message from the tokens, and then running a forward LSTM over all the messages.",
"For each message, it looks at both the content and previous context to decide if the current message is a lie.",
"Fine-tuning BERT (Devlin et al., 2019) embeddings to this model did not lead to notable improvement in F 1 , likely due to the relative small size of our training data.",
"Last, we incorporate information about power imbalance into this model.",
"This model approaches human performance in terms of F 1 score by combining content with conversational context and power imbalance.",
"This section examines specific messages where both players and machines are correctly identifying lies and when they make mistakes on our test set.",
"Most messages are correctly predicted by both the model and players (2055 of 2475 messages); but this is because of the veracity effect.",
"The picture is less rosy if we only look at messages the sender marks as ACTUAL LIE : both players and models are generally wrong (Table 5).",
"Both models and players can detect lies when liars get into specifics.",
"In Diplomacy, users must agree to help one another through orders that stipulate I will help another player move from X to Y.",
"The in-game term for this is support; half the messages where players and computers correctly identify lies contain this word, but it rarely occurs in the other quadrants.",
"Models seem to be better at not falling for vague excuses or fantastical promises in the future.",
"Players miss lies that promise long-term alliances, involve extensive apologies, or attribute motivation as coming from other countries' disinformation ( Model Correct ).",
"Unlike our models, players have access to conversations with other players and accordingly players can detect lies that can easily be verified through conversations with other players ( Player Correct ).",
"However, ultimately most lies are believable and fool both models and players ( Both Wrong ).",
"For example, all messages that contain the word true are predicted as truthful by both models and play-Model Prediction Correct Wrong P l ay er P re d i c t i o n Correct Both Correct Not sure what your plan is, but I might be able to support you to Munich.",
"ers.",
"Many of these messages are relatively tame; 8 confirming the Pinocchio effect found by Swol et al. (2012).",
"If liars can be detected when they wax prolix, perhaps the best way to avoid detection is to be terse and to the point.",
"Sometimes additional contextual information helps models improve over player predictions.",
"For example, when France tells Austria I am worried about a steamroller Russia Turkey alliance, the message is incorrectly perceived as truthful by both the player and the single-message model.",
"However, once the model has contexta preceding question asking if Austria and Turkey were cooperatingit can detect the lie.",
"Finally, we investigate categories from the Harbingers (Niculae et al., 2015) word lists.",
"Lies are more likely to contain subjectivity and premises while true messages include expansion phrases (later, additionally).",
"We also use specific words in the bag of words logistic regression model.",
"The coefficient weights of words that express sincerity (e.g., sincerely, frankly) and apology (e.g., accusation, fallout, alterna-tives) skew toward ACTUAL LIE prediction in the logistic regression model.",
"More laid back appella-8 Examples include It's true[Budapest] back to [Ru-mania] and [Serbia] on to [Albania] could position for more forward convoys without needing the rear fleet... and idk if it's true just letting u know since were allies.",
"tions (e.g., dude, man) skew towards truthfulness, as do words associated with reconnaissance (e.g., fyi,useful, information) and time (e.g., weekend, morning).",
"Contested areas on the Diplomacy map, such as Budapest and Sevastopol, are more likely to be associated with lies, while more secure ones like Berlin, are more likely to be associated with truthful messages.",
"Early computational deception work focuses on single utterances (Newman et al., 2003), especially for product reviews (Ott et al., 2012).",
"But deception is intrinsically a discursive phenomenon and thus the context in which it appears is essential.",
"Our platform provides an opportunity to observe deception in the context in which it arises: goal-oriented conversations around in-game objectives.",
"Gathering data through an interactive game has a cheaper per-lie cost than hiring workers to write deceptive statements (Jurgens and Navigli, 2014).",
"Other conversational datasets are mostly based on games that involve deception including Werewolf (Girlea et al., 2016), Box of Lies (Soldner et al., 2019), and tailor-made games (Ho et al., 2017).",
"However, these games assign individuals roles that they maintain throughout the game (i.e., in a role that is supposed to deceive or in a role that is deceived).",
"Thus, deception labels are coarse: an individual always lies or always tells the truth.",
"In contrast, our platform better captures a more multifaceted reality about human nature: everyone can lie or be truthful with everyone else, and they use both strategically.",
"Hence, players must think about every player lying at any moment: given the evidence, do I think this person is lying to me now ?",
"Deception data with conversational labels is also available through interviews (Prez-Rosas et al., 2016), some of which allow for finer-grained deception spans (Levitan et al., 2018).",
"Compared with game-sourced data, however, interviews provide shorter conversational context (often only a single exchange with a few follow-ups) and lack a strategic incentiveindividuals lie because they are instructed to do so, not to strategically accomplish a larger goal.",
"In Diplomacy, users have an intrinsic motivation to lie; they have entertainment-based and financial motivations to win the game.",
"This leads to higher-quality, creative lies.",
"Real-world examples of lying include perjury (Louwerse et al., 2010), calumny (Fornaciari and Poesio, 2013), emails from malicious hackers (Dhamija et al., 2006), and surreptitious user recordings.",
"But real-world data comes with real-world complications and privacy concerns.",
"The artifice of Diplomacy allows us to gather pertinent language data with minimal risk and to access both sides of deception: intention and perception.",
"Other avenues for less secure research include analyzing dating profiles for accuracy in self-presentation (Toma and Hancock, 2012) and classifying deceptive online spam (Ott et al., 2011).",
"In Dante's Inferno , the ninth circle of Hella fate worse even than that reserved for murderersis for betrayers.",
"Dante asks Count Ugolino to name his betrayer, which leads him to say: but if my words can be the seed to bear the fruit of infamy for this betrayer who feeds my hunger, then I shall speakin tears (Alighieri and Musa, 1995, Canto XXXIII) Similarly, we ask victims to expose their betrayers in the game of Diplomacy.",
"The seeds of players' negotiations and deceit could, we hope, yield fruit to help others: understanding multi-party negotiation and protecting Internet users.",
"While we ignore nuances of the game board to keep our work general, Diplomacy is also a rich, multi-agent strategic environment; Paquette et al. (2019) ignore Diplomacy's rich language to build bots that only move pieces around the board.",
"An exciting synthesis would incorporate deception and language generation into an agent's policy; our data would help train such agents.",
"Beyond playing against humans, playing with a human in the loop ( HITL ) resembles designs for cybersecu-rity threats (Cranor, 2008), annotation (Branson et al., 2010), and language alteration (Wallace et al., 2019).",
"Likewise, our lie-detection models can help a user in the moment better decide whether they are being deceived (Lai et al., 2020).",
"Computers can meld their attention to detail and nigh infinite memory to humans' grasp of social interactions and nuance to forge a more discerning player.",
"Beyond a silly board game, humans often need help verifying claims are true when evaluating health information (Xie and Bugg, 2009), knowing when to take an e-mail at face value (Jagatic et al., 2007), or evaluating breaking news (Hassan et al., 2017).",
"Building systems to help information consumers become more discerning and suspicious in low-stakes settings like online Diplomacy are the seeds that will bear the fruits of interfaces and machine learning tools necessary for a safer and more robust Internet ecosystem.",
"We thank Chris Martin for the introduction to the Diplomacy community and for study suggestions.",
"Feedback from Philip Resnik, Alexander Fraser, Bill Ferguson, James Ryan, and Vinodkumar Prabhakaran helped shape the paper's structure.",
"The information provided in this document is derived from an effort sponsored by the Defense Advanced Research Projects Agency ( DARPA ) and Air Force Research Laboratory ( AFRL ), and awarded to Raytheon BBN Technologies under contract number FA865018-C-7885.",
"Danescu-Niculescu-Mizil is supported by NSF award IIS -1750615 and by NSF grant IIS -1910147.",
"Opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect views of the sponsors.",
"We thank Sebastien A., Joe Brelsford ( Trust-worthyWarMonger ), Sam Brothers, Max Christie, Jordan Connors ( Conq ), Anna Conte, Bill Hack-enbracht, Jack Henrichs, Melissa Lewis, Michael Lotfy ( Blitzkrieg13 ), Joshua Lovett-Graff, Mitch McConeghey, Marko Papic, Christopher Rawles, David Van Slyke ( happypopday ), Reno Vargh-ese, Tyler Waaler, Joseph Wheeler ( Sloth ), Phillip Wilcox, Jorge Zhang ( Caped Baldy ), Daniel Zhu, papa_k , questionmark , and the dozens of other players that made the games possible."
] | [
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"We introduce a new dataset for Q uestion Re writing in C onversational C ontext (QReCC), which contains 14K conversations with 80K question-answer pairs.",
"The task in QReCC is to find answers to conversational questions within a collection of 10M web pages (split into 54M passages).",
"Answers to questions in the same conversation may be distributed across several web pages.",
"QReCC provides annotations that allow us to train and evaluate individual subtasks of question rewriting, passage retrieval and reading comprehension required for the end-to-end conversational question answering (QA) task.",
"We report the effectiveness of a strong baseline approach that combines the state-of-the-art model for question rewriting, and competitive models for open-domain QA.",
"Our results set the first baseline for the QReCC dataset with F1 of 19.10, compared to the human upper bound of 75.45, indicating the difficulty of the setup and a large room for improvement.",
"It is often not possible to address a complex information need with a single question.",
"Consequently, there is a clear need to extend open-domain question answering (QA) to a conversational setting.",
"This task is commonly referred to as conversational (interactive or sequential) QA (Webb, 2006; Saeidi et al., 2018; Reddy et al., 2019).",
"Conversational QA requests an answer conditioned on both the question and the previous conversation turns as context.",
"Previously proposed large-scale benchmarks for conversational QA, such as QuAC and CoQA, limit the topic of conversation to the content of a single document.",
"In practice, however, the answers can be distributed across several documents Equal contribution.",
"that are relevant to the conversation, or the topic of the conversation may also drift.",
"To investigate this phenomena and develop approaches suitable for the complexities of this task, we introduce a new dataset for open-domain conversational QA, called QReCC.",
"1 The dataset consists of 13.6K conversations with an average of 6 turns per conversation.",
"A conversation in QReCC consists of a sequence of question-answer pairs.",
"The answers to questions were produced by human annotators, who looked up relevant information on the web using a search engine.",
"QReCC is therefore the first large-scale dataset for conversational QA that incorporates an information retrieval subtask.",
"QReCC is accompanied with scripts for building a collection of passages from the Common Crawl and the Wayback Machine for passage retrieval.",
"QReCC is inspired by the task of question rewriting (QR) that allows us to reduce the task of conversational QA to non-conversational QA by 1 https://github.com/apple/ml-qrecc generating self-contained versions of contextually-dependent questions.",
"QR was recently shown crucial for porting retrieval QA architectures to a conversational setting (Dalton et al., 2019).",
"Follow-up questions in conversational QA often depend on the previous conversation turns due to ellipsis (miss-ing content) and coreference (anaphora).",
"Every question-answer pair in QReCC is also annotated with a question rewrite.",
"We evaluate the quality of these rewrites as self-contained questions in terms of the ability of the rewritten question, when used as input to the web search engine, to retrieve the correct answer.",
"A snippet of a sample QReCC conversation is given in Figure 1.",
"The dataset collection included two phases: (1) dialogue collection, and (2) document collection.",
"First, we set up an annotation task to col-lect dialogues with question-answer pairs along with question rewrites and answer provenance links.",
"Second, after all dialogues were collected we downloaded the web pages using the provenance links, and then extended this set with a random sample of other web pages from Common Crawl, preprocessed and split the pages into passages.",
"To produce the first baseline, we augment an open-domain QA model with a QR component that allows us to extend it to a conversational scenario.",
"We evaluate this approach on the QReCC dataset, reporting the end-to-end effectiveness as well as the effectiveness on the individual subtasks separately.",
"Our contributions.",
"We collected the first large-scale dataset for end-to-end, open-domain conversational QA that contains question rewrites that incorporate conversational context.",
"We present a systematic comparison of existing automatic evaluation metrics on assessing the quality of question rewrites and show the metrics that best correlate with human judgement.",
"We show empirically that QR provides a unified and effective solution for resolving references both co-reference and ellipsis in multi-turn dialogue setting and positively impacts the conversational QA task.",
"We evaluate the dataset using a baseline that incorporates the state-of-the-art model in QR and competitive models for passage retrieval and answer extraction.",
"This dataset provides a resource for the community to develop, evaluate, and advance methods for end-to-end, open-domain conversational QA.",
"QReCC builds upon three publicly available datasets and further extends them to the open-domain conversational QA setting: Question Answering in Context (QuAC) (Choi et al., 2018), TREC Conversational Assistant Track (CAsT) (Dalton et al., 2019) and Natural Questions (NQ) (Kwiatkowski et al., 2019).",
"QReCC is the first large-scale dataset that supports the tasks of QR, passage retrieval, and reading comprehension (see Table 1 for the dataset comparison).",
"Open-domain QA.",
"Reading comprehension (RC) approaches were recently extended to incorporate a retrieval subtask (Chen et al., 2017; Yang et al., 2019; Lee et al., 2019).",
"This task is also referred to as machine reading at scale (Chen et al., 2017) or end-to-end QA (Yang et al., 2019).",
"In this setup a reading comprehension component is preceded by a document retrieval component.",
"The answer spans are extracted from documents retrieved from a document collection, given as input.",
"The standard approach to end-to-end open-domain QA is (1) use an efficient filtering approach to reduce the number of candidate passages to the topk of the most relevant ones (usually BM25 based on the bag-of-words representation); and then (2) re-rank the subset of the topk relevant passages using a more fine-grained approach, such as BERT based on vector representations (Yang et al., 2019).",
"Conversational QA.",
"Independently from end-to-end QA, the RC task was extended to a conversational setting, in which answer extraction is conditioned not only on the question but also on the previous conversation turns (Choi et al., 2018; Reddy et al., 2019).",
"The first attempt at extending the task of information retrieval (IR) to a conversational setting was the recent TREC CAsT 2019 task (Dalton et al., 2019).",
"The challenge was to rank passages from a passage collection by their relevance to an input question in the context of a conversation history.",
"The size of the collection in CAsT 2019 was 38.4M passages, requiring efficient IR approaches.",
"As efficient retrieval approaches operate on bag-of-words representations they need a different way to handle conversational context since they can not be trained end-to-end using a latent representation of the conversational context.",
"A solution to this computational bottleneck was a QR model that learns to sample tokens from the conversational context as a pre-processing step before QA.",
"Question Rewriting.",
"CANARD (Elgohary et al., 2019) provides rewrites for the conversational questions from the QuAC dataset.",
"QR effectively modi-fies all follow-up questions such that they can be correctly interpreted outside of the conversational context as well.",
"This extension to the conversational QA task proved especially useful while allowing retrieval models to incorporate conversational context (Voskarides et al., 2020; Vakulenko et al., 2020; Lin et al., 2020).",
"More recently, Qu et al. introduced OR-QuAC dataset that was automatically constructed from QuAC and CANARD datasets.",
"OR-QuAC uses the same rewrites and answers as the ones provided in QuAC and CANARD.",
"In contrast to OR-QuAC, the answers in QReCC are not tied to a single Wikipedia page.",
"The answers can be distributed across several web pages.",
"QReCC's passage collection is also larger and more diverse: 11M passages from Wikipedia in OR-QuAC vs. 54M passages from CommonCrawl in QReCC.",
"The answers in OR-QuAC are single spans, whereas QReCC answers were produced by human annotators instructed to imitate natural conversational answers and may include several spans from different parts of the same web page.",
"TREC CAsT 2019 paved the way to conversational QA for retrieval but had several important limitations: (1) no training data and (2) no answer spans.",
"First, the size of the CAsT dataset is limited to 80 dialogues, which is nowhere enough for training a machine-learning model.",
"This was also the reason why CANARD played such an important role for the development of retrieval-based approaches even though it was collected as a RC dataset.",
"Second, the task in TREC CAsT 2019 was conversational passage retrieval not extractive QA since the expected output was ranked passages and not a text span.",
"We designed QReCC to overcome both of these limitations.",
"The size of the QReCC dataset is comparable with other large-scale conversational QA datasets (see Table 1).",
"The most relevant to our work is the concurrent work by Ren et al., who extended the TREC CAsT dataset with crowd-sourced answer spans.",
"Since the size of this dataset is inadequate for training a machine-learning model and can be used only for evaluation, the authors train their models on the MS MARCO dataset instead, which is a non-conversational QA dataset (Bajaj et al., 2016).",
"Their evaluation results show how the performance degrades due to the lack of conversational training data.",
"TREC CAsT will continue in the future and the QReCC dataset provides a valuable benchmark helping to train and evaluate novel conversational QA approaches.",
"To simplify the data collection task we decided to use questions from pre-existing QA datasets as seeds for dialogues in QReCC.",
"We used questions from QuAC, CAsT and NQ.",
"While QuAC and CAsT datasets contain question sequences, NQ is not a conversational dataset but contains stand-alone questions from web search.",
"We use the NQ dataset to increase and diversify the number of samples beyond QuAC and CAsT by generating more rewrites for cases beyond coreference resolution.",
"The majority of the follow-up questions in QuAC require coreference resolution for QR.",
"Therefore, we explicitly instructed the annotators to use NQ as a start of a conversation and then come up with relevant follow-up questions, which would require generation of missing content, i.e., ellipsis, instead of coreference resolution for QR.",
"The task for the annotators was also to answer questions using a web search engine.",
"Question rewrites were used as input to a search engine.",
"This Table 2: Summary statistics for the QReCC dataset.",
"setup helps to obtain feedback on the quality of QR with respect to the effectiveness of answer retrieval (see Section 6 for more details on using search results for the evaluating QR).",
"Finally, the question-answer pair is annotated with the link to the web page that was used to produce the answer.",
"Thereby, every dialogue was produced by the same annotator including the questions, answers and rewrites.",
"This design decision is called self-dialog technique that was shown to help improve quality of the data by avoiding some of the challenges observed in simulated dialogues produced by pairs of annotators (Byrne et al., 2019).",
"A team of 30 professional annotators with a project lead were employed to perform the task.",
"The annotation task was described in the guidelines (see Appendix B for more details).",
"To ensure the quality of the annotations we followed a posthoc evaluation procedure, in which 5 reviewers go through the dataset and update incorrect examples they identify with consensus.",
"QReCC contains 13,598 dialogues with 79,952 questions in total.",
"9.3K dialogues are based on the questions from QuAC; 80 are from TREC CAsT; and 4.4K are from NQ.",
"9% of questions in QReCC do not have answers.",
"We still retained the question rewrites even if no answer was found on the web.",
"112 questions were annotated with links to web pages without answer texts, e.g. May I have a link to road signs in Singapore?",
"We prepared three standard dataset splits and ensured that they are balanced in terms of the standard dialogue statistics and the types of QR (see Table 2).",
"We distinguish four types of QR.",
"They differ with respect to the intervention required to resolve contextual dependencies in dialogue.",
"These Figure 2: The 10 most frequently replaced tokens in QReCC.",
"types can be automatically identified by measuring the difference between an original question Q and a question rewrite R that are represented as sets using the bag-of-words: Insertion new tokens are added to the original question to produce the rewrite (e.g., What are some of the main types What are some of the main types of Yoga ?): Q \\ R = R \\ Q Removal some tokens are removed from the question to produce the rewrite (e.g., Can you tell me about the C++ language mentioned Can you tell me about the C++ language): Q \\ R R \\ Q = Replacement some tokens are added and some are removed to produce the rewrite (e.g., Does it help in reducing stress Does Yoga help in reducing stress): Q \\ R R \\ Q Copy no modification is needed, i.e., the original question is already contextually independent (e.g., What are common poses in Kundalini Yoga?): Q \\ R = R \\ Q = , i.e., Q = R The majority of questions in QReCC (52%) require Replacement .",
"Figure 2 shows the tokens that are most frequently replaced in QR.",
"All of them are pronouns that require anaphora resolution.",
"By specifically targeting more rare types of question rewriting in our data collection task we managed to increase the proportion of the Insertion cases in our dataset.",
"This allows us to train and evaluate the ability of the model to reconstruct missing context, which cannot be achieved using traditional co-reference resolution approaches.",
"We download the web pages using the answer provenance links provided by the annotators from the Internet Archive Wayback Machine.",
"2 Then, we complement the relevant pages with randomly sampled web pages that constitute 1% of the Common Crawl dataset identified as English pages.",
"The final collection consists of approximately 14K pages from the Wayback Machine and 9.9M random web pages from the Common Crawl dataset.",
"The scripts for reproducing the passage collection are on GitHub.",
"See Appendix A.2 for more details.",
"After downloading the pages we extract the textual content from the HTML and split texts into passages of least 220 tokens.",
"After segmentation, we have a total of 54M passages which we index using Anserini (Yang et al., 2017).",
"We search the passage collection using the human annotated answers to augment the dataset with alternative sources of correct answers.",
"For each document returned, we identify the span in the document that has the highest token overlap (F1) with the human answer.",
"We consider all documents with F1 0.8 as relevant.",
"Verifying adequacy of this simple heuristic by human annotators is left for future work.",
"BLEU has typically been used in previous work for measuring the quality of QR (Elgohary et al., 2019; Lin et al., 2020).",
"We conduct a systematic evaluation and compare BLEU with alternative metrics, previously applied in summarization and translation, to ensure the most reliable metrics we can obtain for the model selection.",
"Our evaluation shows that BLEU does not compare favourably with other metrics in evaluating the quality of QR.",
"Task.",
"We took a random sample of 10K questions and used a seq-to-seq model (Nallapati et al., 2016) trained with questions and conversation context from the QReCC dataset to generate question rewrites.",
"These generated rewrites were compared to the ground truth rewrites produced by human annotators.",
"Different annotators graded each model-generated rewrite with a binary label: 0 (incor-rect rewrite) or 1 (correct rewrite).",
"For a question rewrite to be correct it does not have to exactly 2 We use the version of a web page, which is the closest to the end date of the dialogue collection (November 24, 2019).",
"match the ground truth rewrite, but it should correctly capture the conversational context and be a self-contained question.",
"For example, the model-generated rewrite What are the global warming dangers? is a correct rewrite with the ground truth rewrite being What are the dangers of global warming?.",
"In addition, we also assess the variance of the human assessments.",
"The Pearson correlation between any two annotators on average is 0.94.",
"We observed the mean and the variance to be 0.083 and 0.076 respectively.",
"Performing a two-tail statistical significance test shows the P-value to be 0.0201.",
"We use several automated metrics to compare the rewrites with the ground truth and compute their Pearson correlation with the human judgements (see Table 3 for results).",
"Exact Match is a binary variable that indicates the token set overlap applied after the standard preprocessing: lower-casing, stemming, punctuation and stopword removal.",
"ROUGE (Lin, 2004) reflects similarity between two texts in terms of n-gram overlap (R-1 for unigrams; R-2 for bigrams and R-L for the longest common n-gram).",
"We report the mean for precision (P), recall (R) and F-measure (F).",
"METEOR (Denkowski and Lavie, 2014) is a machine translation metric based on exact, stem, synonym, and paraphrase matches between words and phrases.",
"BLEU (Papineni et al., 2002) is a text similarity metric that uses a modified form of precision and n-grams from candidate and reference texts.",
"Embeddings group several unsupervised approaches that produce a sentence-level vector representation: Universal Sentence Encoder (Cer et al., 2018) and InferSent (Conneau et al., 2017).",
"Search Results we use both question rewrites in Google Search and compare the overlap between the produced page ranks in terms of the standard IR metrics: Recall@ k for the topk links, Average Recall (AR) and Normalized Discounted Cumulative Gain (NDCG).",
"overlap of the web search results ( R@10 ).",
"The best metrics independent of QA are Universal Sentence Embedding ( USE ) and unigram recall ( ROUGE-1 R ).",
"We provide more details of the metrics performance illustrated with examples and the discussion in Appendix C. We use the set of all three best evaluation metrics to select the optimal QR model for our baseline approach.",
"We extend BERTserini (Yang et al., 2019), an efficient approach to open-domain QA, with a QR model to incorporate conversational context.",
"This approach consists of three stages: (1) QR, (2) PR and (3) RC.",
"First, a model is trained to generate a stand-alone question given a follow-up question and the preceding question-answer pairs.",
"In the second stage, PR, the topk relevant passages are retrieved from the index using BM25 using the rewritten question.",
"Finally, in RC, a model is trained to extract an answer span from a passage or predict if the passage is irrelevant.",
"The scores obtained from PR and RC are then combined as a weighted sum to produce the final score.",
"The span with the highest score is chosen as the final answer.",
"We evaluate a co-reference model and several generative models on the QR subtask using the question rewrites in QReCC and the set of QR metrics selected in Section 6.",
"The best performing model is then used in a combination with BERTserini to set the baseline results for the end-to-end QA task.",
"All our Transformer-based models were initialized with the pretrained weights of GPT-2 (English medium-size) (Radford et al., 2019) and further fine-tuned on question rewrites from the QReCC training set (see Appendix A.1).",
"AllenAI Coref is the state-of-the-art model for coreference resolution task (Lee et al., 2018).",
"We adapt it for QR with a heuristic that substitutes all coreference mentions with the corresponding antecedents from the cluster.",
"GECOR uses two bi-GRU encoders, one for user utterance and other for dialogue context, and a pointer-generator decoder previously proposed for task-oriented dialogues (Quan et al., 2019).",
"Generator + Multiple-choice model has a second head for the auxiliary classification task that distinguishes between the correct rewrite and several noisy rewrites as negative samples (inspired by TransferTransfo (Wolf et al., 2019b)).",
"CopyTransformer uses one of the attention heads of the Transformer as a pointer to copy tokens from the input sequence directly (Gehrmann et al., 2018).",
"Transformer++ model has two language modeling heads that produce separate vocabulary distributions, which are then combined via a parameterized weighted sum (the coefficients are produced by combining the output of the first attention head and the input embeddings).",
"We implemented BERTserini following Yang et al. (2019) We use the standard BM25 ranking for passage retrieval with k 1 = 0 .",
"82 , b = 0 .",
"68 , which was previously found to work well for passage retrieval on MS MARCO.",
"We then retrieve the top-100 relevant passages per question.",
"Afterwards, we use BERT-Large fine-tuned for the task of reading comprehension.",
"This model takes a question and each of the relevant passages as input and produces the answer span (Wolf et al., 2019a).",
"BERT-Large produces a score ( SBERT ), which is combined with the retrieval score for each of the passages ( S Anserini ) through simple linear interpolation: S = ( 1 ) S Anserini + SBERT We pick the span with the highest score S as the answer.",
"The parameter [ 0 , 1 ] was tuned using a 10% random subset of the QReCC training set withheld from the BERT-Large training (we found = 0 . 7 to work best).",
"BERT-Large was trained on human rewrites from the QReCC training set, and evaluated on the test set using either the original questions, human rewrites or the rewrites produced by Trans-former++.",
"The model is trained to either predict an answer span or predict that the passage does not contain an answer.",
"No answer for the question is predicted only when neither of the relevant passages predicts an answer span.",
"The model was trained on 480K paragraphs that contain the correct answers and 5K of other paragraphs as negative samples (see Appendix A.3 for more details).",
"We use the results of QR to select the best model and then use it for the end-to-end QA task.",
"Question rewrites are used as input for both passage retrieval and reading comprehension tasks.",
"The effectiveness of the QR component is compared with the end-to-end model conditioned on the conversational context.",
"We analyze the effectiveness of our QR models by doing a 5-fold cross validation and obtaining the best performing metrics.",
"Figure 3 contains 3 plots showing ROUGE 1-R, USE and R@10 across 5 turns.",
"We start with the second turn because the first turn always is a self-contained query.",
"The metrics across turns also stay stable with the same result for all the models.",
"The Transformer++ model is stable with little variance in terms of its maximum and minimum metric values across all the best performing metrics.",
"Our evaluation results are summarized in Table 4.",
"All generative models outperform the state-of-the-art coreference resolution model (AllenAI Coref).",
"We noticed that PointerGenerator which employs a bi-LSTM encoder with a copy and generate mechanism outperforms Generator using Transformer alone.",
"We could not find evidence that pretraining with an auxiliary regression task can improve the QR model effectiveness (Generator + Multiple-choice).",
"Use of two separate bi-GRU encoders for the query and conversation context further improved the QR effectiveness (GECOR).",
"Modeling both copying and generating the tokens from the input sequence employing the Transformer helped improve the effectiveness of the QR model (Copy-Transformer) compared to other existing generative models.",
"Finally, obtaining the final distribution by computing token probabilities and weighting question and context vocabulary distributions with those probabilities helped improve over the best performing generative model (Transformer++).",
"Table 5 shows the mean reciprocal rank (MRR), R@10, and R@100 of using the original, Trans-former++, and human rewritten questions.",
"R@ k is averaged across all questions.",
"For a question, if R@ k is 1.0, it means that there is a passage in the topk at any rank such that the passage is relevant; and 0.0 otherwise.",
"Table 6 shows the standard F1 and Exact Match metrics for extractive QA for Table 6: Mean F1 and Exact Match scores (%) on passages for extractive QA.",
"In the Known Context setting, we use the relevant passage from the web page indicated by the human annotator, i.e., without passage retrieval.",
"In the Extractive Upper Bound setting, we use a heuristic to find the answer span with the highest F1 score among the top-100 retrieved passages with human rewrite.",
"This setup indicates the best the reader can do given the retrieval results.",
"The upper bound on the answer span extraction (F1 = 75.45) highlights the need for more sophisticated QA techniques than the standard reading comprehension approaches can offer now.",
"Some answer texts in QReCC were paraphrased or summarised using multiple passages from the same web page.",
"Abstractive approaches to answer generation are necessary to close this gap.",
"Even using single document span extraction techniques, there is a large room for improvement.",
"Comparing Known Context to End-to-End we see losses introduced by the retrieval step, and comparing the Extractive Upper Bound to Known Context we see the sizeable margin of improvement available even for extractive models.",
"This shows that even with competitive baselines the QA tasks are all far from solved.",
"In both Table 5 and 6 we see that human rewritten questions more than double the effectiveness of using original questions.",
"In the absence of human rewritten questions, using Transfomer++ elevates the effectiveness of the QA tasks, getting it much closer to that proffered by human-level QR.",
"We introduced the QReCC dataset for open-domain conversational QA.",
"QReCC is the first dataset to cover all the subtasks relevant for conversational QA, which include question rewriting, passage retrieval and reading comprehension.",
"We also set the first end-to-end baseline results for QReCC by evaluating an open-domain QA model in combination with a QR model.",
"We presented a systematic comparison of existing automatic evaluation metrics on assessing the quality of question rewrites and show the metrics that best proxy human judgement.",
"Our empirical evaluation shows that QR provides an effective solution for resolving both ellipsis and co-reference that allows to use existing non-conversational QA models in a conversational dialogue setting.",
"Our end-to-end baselines achieve an F1 score of 19.10, well beneath the 75.45 extractive upper bound, suggesting not only room for improvement in extractive conversational QA, but that more sophisticated abstractive techniques are required to successfully solve QReCC."
] | [
"objective",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"result",
"result",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"objective"
] |
[
"Recent models for unsupervised representation learning of text have employed a number of techniques to improve contextual word representations but have put little focus on discourse-level representations.",
"We propose CONPONO 1 , an inter-sentence objective for pretraining language models that models discourse coherence and the distance between sentences.",
"Given an anchor sentence, our model is trained to predict the text k sentences away using a sampled-softmax objective where the candidates consist of neighboring sentences and sentences randomly sampled from the corpus.",
"On the discourse representation benchmark DiscoEval, our model improves over the previous state-of-the-art by up to 13% and on average 4% absolute across 7 tasks.",
"Our model is the same size as BERT-Base, but outperforms the much larger BERT-Large model and other more recent approaches that incorporate discourse.",
"We also show that CONPONO yields gains of 2%-6% absolute even for tasks that do not explicitly evaluate discourse: textual entailment (RTE), common sense reasoning (COPA) and reading comprehension (ReCoRD).",
"Pretraining large language models has become the primary method for learning representations from unsupervised text corpora.",
"Since the initial improvements demonstrated by ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), many alternative pretraining methods have been proposed to best leverage unlabeled data.",
"These methods include bi-directional language modeling (Peters et al., 2018), masked language models (Devlin et al., 2019), word order permutation (Yang et al., Work done during internship at Google. 1 Code is available at https://github.com/googleresearch/language/tree/master/language/conpono and https://github.com/daniter-cu/DiscoEval 2019), more robust training (Liu et al., 2019) and more efficient architectures (Lan et al., 2019).",
"However, little focus has been put on learning discourse coherence as part of the pretraining objective.",
"While discourse coherence has been of great interest in recent natural language processing literature (Chen et al., 2019; Nie et al., 2019; Xu et al., 2019), its benefits have been questioned for pretrained language models, some even opting to remove any sentence ordering objective (Liu et al., 2019).",
"However, in a recently published benchmark for evaluating discourse representations, Chen et al. (2019) found that the best performing model was surprisingly BERT, despite comparing against models specifically designed for discourse, such as DisSent (Nie et al., 2019) and a new recurrent network trained on a large range of sentence ordering objectives.",
"We show that combining transformer encoders with our inter-sentence coherence objective, we can further improve discourse-level representations in language models.",
"We present a model that trains a sentence-level encoder to capture discourse relationships between sentences, including ordering, distance and coherence.",
"The encoder is trained by using its output to predict spans of text that are some k sentences away from a context in either direction.",
"The predictions are made discriminatively with a sampled-softmax that contrasts the correct target sentence against negatives, including hard examples sampled from the same paragraph.",
"Our objective is inspired by the recently proposed Constrastive Predictive Coding (CPC) (van den Oord et al., 2018), but, among other differences, is applied on the sentence-level rather than the token-level and is bi-directional.",
"We call this the CONtrastive Position and Ordering with Negatives Objective (CONPONO ) 2 .",
"We evaluate our model on DiscoEval (Chen et al., 2019), a recently published benchmark for evaluating and probing for various aspects of discourse-level semantics in representations output by discourse models.",
"We observe that the representations learned with CONPONO outperform BERT-Large and achieve a new state-of-the-art despite using fewer parameters and training on the same data.",
"Furthermore, we show that our new objective improves model performance on other tasks including textual entailment, common-sense reasoning and reading comprehension.",
"We compare CONPONO against BERT-Base on RTE (Giampiccolo et al., 2007; Bentivogli et al., 2009), COPA (Roemmele et al., 2011) and ReCoRD (Zhang et al., 2018), while controlling for model size, training data and training time.",
"1. We describe a novel sentence-level discourse objective that is used in conjunction with a masked language model for unsupervised representation learning for text.",
"We show that this objective can leverage the cross-attention and pretrained weights of a transformer model to learn discourse-level representations.",
"2. We show that our model achieves a new state-of-the-art on DiscoEval, improving the results on 5 of the 7 tasks and increasing accuracy by up to 13% and an average of over 4% absolute across all tasks.",
"We also show 2%-6% absolute improvements over Bert-Base on RTE, COPA and ReCoRD as evidence that discourse pretraining can also improve model performance on textual entailment, commonsense reasoning and reading comprehension.",
"Figure 1 illustrates the CONPONO model.",
"The intuition is that if the model is able to accurately predict the surrounding target sentences given some anchor text, then the vector representations for these sentences should also be useful for downstream tasks.",
"The input to the model is a paragraph that is split into sentences.",
"A sentence is chosen at random as the anchor, and will be denoted as s i .",
"We encode s i with a transformer encoder to produce a vector c i .",
"The surrounding sentences are denoted as s i + k where k [ K .. 1 , 1 .. K ] , meaning the maximum distance we use is K .",
"We report results for K [1 .. 4] .",
"These sentences, s i + k , are encoded jointly with the anchor sentence.",
"We use just a single encoder g so all text is encoded with the same weights.",
"The encoded vectors are named t i + k because these are the target vectors the model tries to identify given the anchor and a target distance k .",
"Equation 1 defines t i + k and c i as a function g of the input sentences.",
"Note that the CONPONO g is different from the encoder in CPC because we input both the anchor and the target into the encoder, rather than separate anchor and target encoders.",
"Given the anchor and targets, we define a log-bilinear model in equation 2 to score the plausibility of target t i + k being in position k from anchor c i .",
"The full set of parameters for our model is for the encoder and a W k for each k .",
"CPC has the same bi-linear form as Equation 2 but the architecture for the encoders is different.",
"The loss for each k is given in equation 3 where the score for the correct target is contrasted to scores of random samples s j , sampled from both in-document and random sentences from the corpus, S .",
"To train CONPONO , we sample negative examples randomly from the corpus and from the same paragraph but different k as hard negatives.",
"Note that when | k | is greater than 1, there will be sentences between the anchor sentence and target sentence that will be purposely omitted from the input.",
"The missing context is intended to create a challenging objective where the model may not be able to rely on trivial signals that often appear in contiguous sentences.",
"For each example we encode two text spans, the anchor and the target.",
"There are three main options for encoding the two spans into c i and t i + k .",
"The simplest method, and most similar to CPC is to encode the anchor and target separately, which we call isolated encoding .",
"With this encoder, equation 1 will be t i + k = g ( s i + k ) .",
"The major drawback of this approach is that there is no token-level cross-attention between the anchor and the target, which has been shown to generally improve text encoding S i-2 S i-1 S i+1 S i+2 S i Encoder Encoder Encoder Encoder Encoder t i-2 t i-1 t i+1 t i+2 c i Predictions S r S r' Encoder Encoder t r t r' Random Negatives Figure 1: During training, a text segment is selected as the anchor ( S i ).",
"(Vaswani et al., 2017).",
"Cross-attention is the mechanism in neural networks that allows for attention to be shared between multiple inputs, in our case, two separate spans of text.",
"Alternatively, we can encode the anchor and target together and then dot product the latent vector with a learned vector representation for each distance k .",
"We call this approach a uni-encoder .",
"With this encoder, equation 2 will be f k ( s i + k , s i ) = exp( t Ti + k w k ) .",
"The class matrix W k in equation 2 is replaced by a class vector w k , which has fewer parameters.",
"This is similar to the ordering objectives in BERT and ALBERT where the pooled representation is used for a binary classification task and the learned vector representation for each distance k is just the softmax weights.",
"The potential drawback to this method is that each pair of sentences is represented by a single vector.",
"This encoder may learn a representation that is similar for all examples that have the same label but does not explicitly model the content of the input.",
"CONPONO implements the intersection of these two approaches.",
"The targets are concatenated to the anchor when encoded, to make use of the cross-attention of the transformer encoder.",
"The anchor, is encoded independently, though with the same weights.",
"This objective allows for more freedom in the values of c i and t i + k , unlike the uni-encoder .",
"Furthermore, since the encoder, g , can encode either one span ( s i ) or two spans ( s i , s i + k ), it can be used for downstream tasks that have either single (eg. SSP) or double (eg. BSO) span inputs.",
"There are different tasks that can be used for learning inter-sentence representations.",
"BERT (Devlin et al., 2019) included a next sentence prediction (NSP) task.",
"For NSP, two spans are fed into the model with the second span either being the next contiguous span of text from the source or 50% of the time it is replaced with a random span from the corpus.",
"The task is a binary classification of whether the two spans are from the same source.",
"ALBERT (Lan et al., 2019) compares the NSP approach to using no inter-sentence objective and to sentence order prediction, which for clarity we refer to as binary sentence ordering (BSO).",
"For BSO, the input is two spans that are always contiguous and from the same source but 50% of the time are in reverse order.",
"With CONPONO we capture the benefits of both learning ordering between coherent sentences and contrasting against random negatives.",
"We make the objective even more challenging by also predicting order on spans that are multiple sentences apart, and using other sentences from the same paragraph as harder negatives.",
"In practice, we use a 512 token input which is much larger than most two sentence pairs.",
"To train on longer sequence lengths, we use 4 sentences as the anchor and 3 sentences as the target segment.",
"We truncate longer sentences and pad tokens up to the sequence length as done for typical BERT input.",
"There is no overlap between the two segments and the k distance refers to the number of sentences omitted between the two segments.",
"For example, for a paragraph we may choose s 7",
"..s 10 as the anchor and s 1",
"..s 3 as the target for k = 4 because s 3 is 4 positions behind s 7 .",
"Since most paragraphs are not long enough to have many sentences in both directions of a 4 sentence anchor, we randomly select 4 of the 8 possible k targets for a given paragraph.",
"Because of the random sampling, we oversample shorter distances because they occur more consistently in the data.",
"We train with 32 input sentences, where 1 is the correct target, 3 are hard negatives from the same document and 28 are random sentences from other documents.",
"For fair comparison, we train on the same data as BERT, using only Wikipedia and BooksCorpus (Zhu et al., 2015).",
"We initialize our model with BERT-Base weights and train until the model has seen one-fourth as many segment pairs as the original BERT model ( 32M total), so the total compute and iterations of training are not significantly greater than BERT-Base.",
"We also use a masked language model objective similar to BERT but dynamically mask during training for different masks each epoch.",
"When jointly encoding two inputs, we concatenate the input tokens and separate the two spans with a [SEP] token to mimic the BERT format.",
"We evaluate our model on the DiscoEval benchmark (Chen et al., 2019) and on the RTE (Giampic-colo et al., 2007; Bentivogli et al., 2009), COPA (Roemmele et al., 2011) and ReCoRD (Zhang et al., 2018) datasets.",
"We chose the DiscoEval benchmark because it is intended to evaluate a model's ability to represent the role of a sentence in its discourse context.",
"We also report results on RTE, COPA and ReCoRD because these tasks have a discourse or sentence ordering aspect to them but are not exclusively designed for discourse evaluation.",
"Tasks: DiscoEval (Chen et al., 2019) is a suite of tasks designed to evaluate discourse-related knowledge in pretrained sentence representations.",
"The benchmark is composed of seven tasks; four based on sentence ordering or coherence (Sentence position (SP), Binary sentence ordering (BSO), Dis-cource coherence (DC) and Sentence section prediction (SSP)) and three that are based on classifying the type of relationship between a pair of text sequences (Penn Discourse Tree Bank Explicit and Implicit (PDTB-E/I) and Rhetorical structure theory (RST)).",
"PDTB (Prasad et al., 2008) and RST (Carlson et al., 2001) are human annotated datasets.",
"Both are multi-class classification tasks where PDTB is classifying a pair of sentences whereas RST is predicting the class of a node in a document-level discourse tree.",
"Both classes of tasks are critical aspects of understanding discourse.",
"Baselines: The previously best overall performing model from DiscoEval (Chen et al., 2019) was BERT-Large (Devlin et al., 2019).",
"We also include the results for BERT-Base because our model is most comparable to BERT-Base in terms of parameter size, training data and training compute.",
"We also evaluate RoBERTa-Base (Liu et al., 2019) because it was trained on more data, reported improvements over BERT-Base on other tasks but dropped the next sentence prediction objective entirely.",
"We also compare against a BERT-Base model which we trained with binary sentence ordering (BERT-Base BSO) because this objective has been shown to be more useful than next sentence prediction (Lan et al., 2019).",
"This BERT-Base BSO model was initialized with BERT weights and trained on the same data but only on contiguous spans of text where 50% of the time we switch the order.",
"This model and CONPONO are initialized from the same weights and trained on the same number of segment pairs so that the two models can be compared fairly.",
"In Section 2.1 we describe different encoding approaches for generating the sentence-level representations.",
"We report results from versions of CONPONO using each of these encoding approaches, labeled isolated to represent separate encoding and uni-encoder to represent joint encoding of the anchor and target without a separate anchor encoding.",
"The final line in Table 1 is the combined approach that we describe in Section",
"2. Modeling DiscoEval We reuse the code from DiscoEval and generally maintain the same process for collecting our results on the benchmark, such as freezing all weights and only training a logistic regression or one layer perceptron on top of the sentence encodings.",
"Note that since we are only interested in the vector representations of the input, we drop the weight matrix W k and only use the output of the encoder.",
"We omit the details for Model SP BSO DC SSP PDTB-E PDTB-I RST-DT avg.",
"the encoding logic for each task since that is explained in detail in Chen et al. (2019).",
"Here we only mention our deviations from the Chen et al. (2019) methodology.",
"The most salient difference is that we only use the pooled representation from our model rather than the average from multiple layers of the model for the SP, BSO and DC tasks.",
"For encoding individual tasks we prefer to encode pairs of sentences together.",
"For SP we encode the first sentence concatenated with every other sentence instead of taking the point-wise difference and concatenate the 5 vectors.",
"For BSO we also encode the two sentences together instead of separately.",
"For DC we split the paragraph into pairs of sentences and encode those together.",
"We concatenate the 3 output vectors.",
"For RST instead of embedding each sentence and doing a mean of all the sentences in a subtree, we simply concatenate those sentences and encode them all together as a single text span.",
"Any text segments longer than 512 tokens are truncated from the end.",
"Results: Table 1 shows that our model outperforms the previous state-of-the-art accuracy on DiscoEval overall.",
"Our model excels in particular on the sentence ordering and coherence tasks (SP, BSO, and DC).",
"Note that our model parameter count is the same as BERT-Base but it outperforms BERT-Large, which has significantly more parameters and has used much more compute for pretraining.",
"From the discussion in Section 2.2, BERT represents using the NSP objective and we train BERT-Base BSO to compare NSP, BSO and CONPONO directly.",
"BERT-Base BSO scores tend to fall between those of BERT-Base and our model, implying that the sentence ordering objective is improving the models for this benchmark, but that binary sentence ordering is not sufficient to capture the added benefits of including more fine-grained ordering and negative examples.",
"We observe that CONPONO outperforms both the isolated encoding and uni-encoding approaches.",
"CONPONO isolated preforms significantly worse than both other approaches, suggesting that cross-attention between the anchor and the target is critical to learning stronger discourse representations.",
"CONPONO uni-encoder results are closer to our combined encoding approach but still fall short on every task.",
"This empirical result suggests that the separate encoding of the anchor during pretraining is important despite the fact that theoretically CONPONO could trivially reduce to the uni-coder representation by ignoring c i .",
"Tasks: DiscoEval was specifically designed to evaluate model performance on discourse tasks but there are many other benchmarks that could also benefit from pretraining for improved discourse coherence.",
"We evaluate our model on three such tasks, Recognizing Textual Entailment (RTE) (Giampic-colo et al., 2007; Bentivogli et al., 2009), Corpus of Plausible Alternatives (COPA) (Roemmele et al., 2011) and Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) (Zhang et al., 2018).",
"We report accuracy on the validation set provided by each dataset.",
"Each example in RTE is a pair of sentences.",
"The model must classify whether or not the second sentence entails the first.",
"Examples in COPA are composed of a single context sentence followed by two candidate sentences that are either a cause or effect of the context sentence.",
"The model must select the Context Completions ReCoRD ...",
"most plausible sentence of the two.",
"Lastly, an example in ReCoRD is a paragraph from a news article, followed by several bullet points and with all the entities marked.",
"The model is given a single sentence from later in the document with a single entity masked out and must select the entity from the context that fills the blank.",
"Table 2 shows examples of each with correct choices in bold.",
"Baselines: We compare our model against BERT-Base because this is the closest model in terms of parameter size and training data.",
"However, since our model is initialized with BERT-Base weights, we also report results from BERT-Base BSO because it was trained on the same number of text examples as CONPONO .",
"We also compare against BERT-Large to contrast to a much larger language model.",
"We provide results from Albert (Lan et al., 2019) when available to provide a state-of-the-art baseline that may have used more data, compute and parameters.",
"The purpose of these results is not to compare against the current state-of-the-art but rather to better understand the improvements that can be found from adding a discourse coherence objective to BERT-Base without significantly increasing the model size or training data.",
"Results: We believe that the coherence and ordering aspects of these evaluation tasks are well fit to demonstrate the how our model can improve on strong baselines such as BERT-Base.",
"Table 3 shows that our model achieves accuracies on RTE and COPA comparable to BERT-Large while having the same number of parameters as BERT-Base.",
"Interestingly, we observe improvements over the baseline with BERT-Base BSO, showing that even Model RTE COPA BERT-Base 66.4 62.0 BERT-Base BSO 71.1 67.0 CONPONO 70.0 69.0 BERT-Large 70.4 69.0 ALBERT 86.6 Table 3: Our model improves accuracy over BERT-Base for RTE and COPA benchmarks.",
"simple discourse-level objectives could lead to noticeable downstream effects.",
"Though these improvements are modest compared to BERT-Large, they are meant to highlight that our model does not only improve on results for artificial sentence ordering tasks, but also on aspects of benchmarks used to generally evaluate pretrained language models and language understanding.",
"The task for the ReCoRD dataset is to select the correct entity from those that appear in the context to fill in the blank in the target.",
"Previous models for ReCoRD have used a similar structure to SQuAD (Rajpurkar et al., 2016) where the model outputs a vector for each token and the model learns the best start and end position of the answer span based on the softmax over all the tokens.",
"We, instead, generate all possible target sentences by filling the blank with each marked entity and discriminatively choose the sentence most likely to be the true plausible sentence from the context.",
"This modified task evaluates how our model compares to BERT-Base choosing the most coherent sentence from a set of nearly identical sentences.",
"In Table 4 we show that CONPONO does achieve a boost over BERT-Base but is still well below BERT-Large exact match score on the harder task of selecting the entities in context.",
"The strong results from BERT-Large imply that having a better representation of the text with a large model is able to subsume any improvement from learning plausible contexts for this task.",
"There are three aspects of our modeling choices that warrant a deeper understanding of their importance to the model:",
"Window size: We ablate the 4 window sizes (ie. choices of k).",
"k = 1 is effectively binary sentence ordering with negative samples.",
"Masked Language Model Objective: We remove the MLM objective allowing the model to optimize only the CONPONO objective without maintaining a good token level representation.",
"Model size: We train a smaller model that is also initialized with pretrained weights.",
"To measure the effects of each of these design decisions, we report DiscoEval scores for each model as well as accuracy on the CONPONO classification task on a held-out set of examples.",
"This is to show how well the model is optimized as well as how well it performs on downstream tasks.",
"Table 5 shows the results on DiscoEval with our model and several key ablations.",
"We observe that using a window size for our objective that is larger than 1 is key to seeing downstream improvements.",
"We believe that this is due to the objective being harder for the model because there is more variation farther from the anchor.",
"At the same time, increasing the window size beyond 2 seems to result in similar performance.",
"This may be because larger distances from the anchor also lead to more ambiguity.",
"We see this reflected in the held-out classification accuracy being lower for examples with larger distance labels in Figure",
"2. We also note that keeping the masked language model objective during pretraining also improves downstream performance.",
"In Figure 2 we see that classification accuracy is consistently lower with the MLM objective compared to without.",
"This is expected because during inference, many key terms may be masked out, making the task harder.",
"However, keeping this objective during pretraining maintains a good token-level representation that is necessary for downstream tasks.",
"Lastly, we try training a smaller version of our model, with only 2 hidden layers, and a 512 intermediate size.",
"The smaller model is able to train much faster, allowing us to train on many more examples and new data.",
"However, we are unable to achieve similar results despite training on 24 times more examples, and including CCNews (Liu et al., 2019), a larger and higher quality data source.",
"To glean some insight into how CONPONO representations may differ from BERT-Base representations, we look at the occurrence of discourse markers in the BSO-Wikipedia task of DiscoEval.",
"We choose this task because it is a simple binary classification task that has only 2 sentences as input and the domain is similar to the pre-training data.",
"We look at the usage of discourse markers identified by Nie et al. (2017); but, when, if, before, because, while, though, after, so, although, then, also, still .",
"3 We extract examples from the test set on which CONPONO output the correct label and BERT-Base output the incorrect label and visa versa.",
"For each set of examples, we measure the change in the occurrence of discourse markers relative to the training data counts.",
"Since some markers are much more common than others, we take the weighted average of the change in appearance rate, where the weights are the training data counts of each marker.",
"3 We omit and and as because they are very common in this corpus but often are not used as connectives between the two candidate sentences for the BSO task.",
"We find that in the set of examples that CONPONO classified correctly, the rate of discourse makers was 15% higher than in the training corpus.",
"This is in contrast to 11% higher among the examples that BERT classified correctly.",
"The standard deviation for random samples of the same size was about 1%.",
"This suggests that both BERT and CONPONO are relying heavily on discourse markers to solve the BSO-Wikipedia task.",
"While it is expected for shallow discourse markers to be strong features for sentence ordering, we expect CONPONO to also incorporate deeper features, such as anaphora, due to its pretraining objective.",
"One indication of CONPONO relying on alternative features than BERT-Base is that there was a 12% relative increase in discourse markers in the CONPONO set when counting markers only in the first sentence whereas an 8% relative increase in the BERT set when counting markers only in the second sentences.",
"The difference in the location of the discourse markers in the two sets of examples suggests that CONPONO and BERT utilize those features differently and that CONPONO may be less likely to incorrectly classify examples that use discourse markers in the first sentence of a BSO example.",
"Manually inspecting a sample of examples hints that there are often strong corefer-ences between the two input sentences that indicate the ordering.",
"Table 6 shows two examples from the CONPONO correct set which is drawn from the BSO-Wikipedia test data.",
"In both examples, the discourse marker appears in the first sentence but the second sentence contains anaphora referring to an antecedent in the first sentence.",
"Some of the largest improvements on benchmarks such as GLUE (Wang et al., 2018) have come from ELMO's large scale bi-directional language modeling (Peters et al., 2018), BERT's masked language models (Devlin et al., 2019), XLNET's generalized autoregressive pretraining (Yang et al., 2019), RoBERTa's robust training (Liu et al., 2019) and ALBERT's parameter reduction techniques (Lan et al., 2019).",
"As discussed in Section 2.2, most language model were limited to NSP or BSO for inter-sentence representation learning.",
"We showed that by comparing to BERT, which uses NSP and BERT-Base BSO which we train with the BSO objective that our objective is able to improve the discourse-level representations by training on more fine-grained sentence ordering, non-contiguous In 1941 [1]Vaughn joined the United States National Guard for what had been planned as a one-year assignment , but when [2]World War II broke out , he was sent abroad until the war ended in 1945 .",
"neighboring sentences and contrasting against random negatives.",
"Early approaches to sentence representation, such as Skip-Thought Vectors (Kiros et al., 2015), mimicked word embedding methods in addition to left-to-right language modeling to use unlabeled data to learn sentence level representations.",
"DisSent (Nie et al., 2019) focused more on collecting data that could be used to train a supervised classification model on pairs of sentences.",
"These and other innovations in sentence representation lead to the creation of more evaluations for discourse and coherence representation (Chen et al., 2019; Xu et al., 2019).",
"Like other unsupervised representation learning models, CONPONO is trained to generate a latent variable that encodes inter-sentence relationship and discourse coherence.",
"Our objective is inspired by the Contrastive Predictive Coding (CPC) objective (van den Oord et al., 2018).",
"CPC was originally designed to be a universal unsupervised learning approach to extract useful representations from high-dimensional data and was previously implemented on the token-level for text models.",
"We utilize the k-distance predictions of CPC because it naturally captures discourse and sentence ordering properties when applied on the sentence-level.",
"Furthermore, by combining our objective with a transformer encoder, our model is able to benefit from cross-attention between the anchor and the target sentences, which we show outperforms encoding the anchor and target separately, as implemented in CPC.",
"In Section 3.3 we show that the cross-attention is an important factor in learning a good representation for downstream tasks and effectively optimizing our inter-sentence objective.",
"In this paper we present a novel approach to encoding discourse and fine-grained sentence ordering in text with an inter-sentence objective.",
"We achieve a new state-of-the-art on the DiscoEval benchmark and outperform BERT-Large with a model that has the same number of parameters as BERT-Base.",
"We also observe that, on DiscoEval, our model benefits the most on ordering tasks rather than discourse relation classification tasks.",
"In future work, we hope to better understand how a discourse model can also learn fine-grained relationship types between sentences from unlabeled data.",
"Our ablation analysis shows that the key architectural aspects of our model are cross attention, an auxiliary MLM objective and a window size that is two or greater.",
"Future work should explore the extent to which our model could further benefit from initializing with stronger models and what computational challenges may arise.",
"We wish to thank the Stanford NLP group for their feedback.",
"We gratefully acknowledge support of the DARPA Communicating with Computers (CwC) program under ARO prime contract no.",
"W911NF15-1-0462 References Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini."
] | [
"abstain",
"objective",
"objective",
"result",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Automating the assessment of learner summaries provides a useful tool for assessing learner reading comprehension.",
"We present a summarization task for evaluating nonnative reading comprehension and propose three novel approaches to automatically assess the learner summaries.",
"We evaluate our models on two datasets we created and show that our models outperform traditional approaches that rely on exact word match on this task.",
"Our best model produces quality assessments close to professional examiners.",
"Summarization is a well-established method of measuring reading proficiency in traditional English as a second or other language (ESOL) assessment.",
"It is considered an effective approach to test both cognitive and contextual dimensions of reading (Weir et al., 2013).",
"However, due to the high time and cost demands of manual summary assessment, modern English exams usually replace the summarization task with multiple choice or short answer questions that are easier to score (Alder-son, 2005).",
"Automating the assessment of learner summarization skills provides an efficient evaluation method for the quality of the learner summary and can lead to effective educational applications to enhance reading comprehension tasks.",
"In this paper, we present a summarization task for evaluating non-native reading comprehension and propose three novel machine learning approaches to assessing learner summaries.",
"First, we extract features to measure the content similarity between the reading passage and the summary.",
"Secondly, we calculate a similarity matrix based on sentence-to-sentence similarity between The work by the first author was done at the University of Cambridge prior to joining Amazon Research.",
"the reading passage and the summary, and apply a Convolutional Neural Network (CNN) model to assess the summary quality using the similarity matrix.",
"Thirdly, we build an end-to-end summarization assessment model using the Long Short Term Memory (LSTM) model.",
"Finally, we combine the three approaches in a single system using a simple parallel ensemble modeling technique.",
"We compiled two datasets to evaluate our models, and we release this data with the paper.",
"We show that our models outperform traditional approaches that rely on exact word match on the task and that our best model produces quality assessments close to professional examiners.",
"Most of the previous studies on summary assessment are aimed at evaluating automated summarization systems (Lin, 2004; Lin and Hovy, 2003; Nenkova et al., 2007).",
"In contrast to this line of work, our goal is to assess human-written summaries rather than machine-generated ones.",
"Within the educational domain, several applications have been developed to help students with their writing summarization skills.",
"Summary Street (Wade-Stein and Kintsch, 2004) is an educational software designed for children to develop summarization skills.",
"It asks students to write a summary to a reading passage, and scores the summary by using Latent Semantic Analysis (LSA) to construct semantic representations of the text.",
"This system uses the cosine similarity score based on LSA as the sole indicator of content similarity.",
"OpenEssayist (Whitelock et al., 2013) is an interactive system that provides students with the automated feedback about their summative essays.",
"The system extracts the key words, phrases and sentences from the essays and helps the students to investigate their distribution in text and the potential implications for the clarity of the narrative.",
"The work that is most similar to ours is the automatic scoring of a summarization task by Madnani et al. (2013), who designed a task to measure the reading comprehension skills of American students.",
"In their experiments, students were asked to write a four-sentence summary for each of the two three-paragraph reading passages, with the first sentence summarizing the whole passage and the following three sentences summarizing each paragraph.",
"To build an automated system to score the summaries, they randomly select a student summary with the highest score as the reference, and use 8 feature types to train a logistic regression classifier to predict the summary score.",
"They train a separate classifier for each of the two passages, and obtain accuracy scores of 65% and 52% respectively, outperforming the most-frequent-score baselines of 51% and 32% .",
"Most of the features used in Madnani et al. (2013) are based on verbatim overlap.",
"Although such metrics prove to be effective in various tasks, they cannot capture the content similarity when paraphrasing or a higher level of abstraction are used in the summary.",
"Few studies have addressed summarization assessment at a higher level.",
"More recently, Ruseti et al. (2018) have used Recurrent Neural Networks (RNNs) to automatically score summaries.",
"In their model, a concatenated representation of the summary and the text built from two separate RNNs as well as a complexity score of the text are fed to a fully connected layer to predict a real number between [0 , 1] .",
"This number is then mapped to 3 discrete classes representing the quality of the summary using linear regression.",
"Their best model achieves 55 .",
"24% in accuracy on a dataset of 636 summaries collected using Mechanical Turk.",
"In this paper, we address several limitations of previous work.",
"We build a system that uses verbatim features as well as features capturing higher level of abstraction.",
"First, we aim to build a generic system that can evaluate the quality of a summary without having to train a separate model for each text.",
"Second, whereas Madnani et al. (2013) use a student summary with the highest score as the reference to evaluate the candidate summary, our goal is to build a fully-automated system that does not require selecting a pre-defined reference.",
"Third, we aim to explore features and structures capable of better modeling semantic similarity beyond verbatim overlap.",
"This section outlines the summarization task used in our experiments.",
"First, learners, regardless of their proficiency level, were asked to read three reading passages extracted from the Cambridge English Exams dataset of Xia et al. (2016) at the lower (B1), upper intermediate (B2), and advanced (C1) levels of the Common European Framework of Reference for Languages (CEFR).",
"Then they were asked to write a summary of 50 , 100 , and 120 words for each of the three tasks.",
"1 3.1 Pilot study with simulated learner data Before launching the experiments with the actual language learners and in order to develop the automated summary evaluation system, we first ran a pilot study and collected simulated learner summaries from 50 members of our university.",
"Since most participants of this study would speak English at an advanced (C1-C2) level, we asked them to write a good summary and a bad summary for each reading passage, mimicking a learner.",
"The participants were asked to produce grammatically correct sentences and to write a bad summary in a way that a learner who does not fully understand the original passage might produce.",
"In total, 300 summaries were collected (with 150 good summaries and 150 bad ones).",
"The simulated learner data was then used to train binary classification systems to assess whether a summary captures the passage content properly or not.",
"Next, we collected summaries from second language learners at B1, B2 and C1-C2 levels of proficiency.",
"2 In total, 411 summaries from 137 learners were collected.",
"The distribution of the learner proficiency levels is shown in Table",
"1. 1 The word limits on the summarization tasks are set to keep a relatively constant compression ratio between the summary and the length of the original passage.",
"2 The proficiency levels of learners were self-identified.",
"All content included is accurate, with no irrelevant details or repetitions.",
"Target reader is fully informed.",
"Band 4: Performance shares features of Bands 3 and 5.",
"Band 3: The summary demonstrates acceptable understanding of the passage: Most of the main points are included.",
"Most of the content is relevant and paraphrased, with some irrelevant details, repetitions or inaccuracy of content.",
"Target reader is on the whole informed.",
"Band 1: The summary demonstrates very little understanding of the passage: Most of the content is of limited relevance, with repetitions or verbatim borrowing from the original text.",
"In some paraphrased parts of the text, inaccuracy of content or omissions of main points are evident.",
"Target reader is minimally informed.",
"Band 0: No understanding of the passage is demonstrated.",
"The content is totally irrelevant to the original passage.",
"Target reader is not informed.",
"Figure 1 shows the distribution of the scores for the learner summaries.",
"The pairwise correlation between annotators ranges between 0 .",
"690 and 0 .",
"794 .",
"To derive the final score for each summary, we take the average of the scores by the three annotators.",
"This results in a set of real-valued average scores on the scale of [0 , 5] and allows us to treat this task as a regression problem and make use of the continuity of the assessment scale.",
"The goal of the experiments on this data is then to train a regression model to predict a score that correlates well with the annotators' judgments.",
"In this section, we introduce three different approaches to the automated evaluation of the learner summaries.",
"First of all, we extract a number of features to describe the similarity of the summary and the reading text and apply a machine learning model to predict the summary quality.",
"The summarization task for reading comprehension examines the content relevance and the ability to convey the main ideas of the text in the summary.",
"To automatically assess the learner summary, we compare the candidate summary against a reference to assess the quality of its content.",
"We experiment with two types of references to evaluate the candidate summary: firstly, we compare the candidate summary against the original passage directly, and secondly, we extract key sentences from the original text with an automated extractive summarizer and compare the candidate summary to the set of key sentences.",
"Ideally, an extractive summarizer extracts a subset of sentences from the passage that are highly representative of the original text.",
"Although the extracted key sentences are not necessarily coherent among themselves, they provide a representation of the main ideas of the text.",
"Comparing the candidate summary against the key sentences allows us to examine the content relevance and the coverage of the main ideas in the candidate summary.",
"We compare two popular summarizers in selecting the key sentences for reference: TextRank (Mihalcea and Tarau, 2004) and MEAD (Radev et al., 2004).",
"We also compare the extractive summarizers against the baseline of using a random selection of sentences as the reference.",
"After obtaining the reference, we derive four types of linguistic features to evaluate the quality of the learner summary: (1) verbatim features, (2) semantic similarity features, (3) features based on distributed vector representations of the summary, and (4) features that describe discourse and other textual characteristics of the summary.",
"Verbatim similarity is the most straightforward measure that indicates content similarity.",
"Verbatim features measure the lexical overlap of the text units between the candidate summary and the reference.",
"We use the following metrics to measure verbatim similarity: ROUGE (Lin, 2004), BLEU (Papineni et al., 2002), and METEOR (Denkowski and Lavie, 2011).",
"The three metrics are commonly used to assess automated summarization systems.",
"ROUGE and BLEU are based on exact word match of N-grams, and METEOR extends the exact word match with stem, synonym, and paraphrase matches extracted from the Word-Net (Miller, 1995) and a background dictionary, which allows for more flexible expressions.",
"Although verbatim overlap metrics prove to be effective in various tasks, they fail to capture the content similarity when paraphrasing and higher levels of abstraction are used in the summary.",
"To compensate for this, word embeddings and sentence embeddings are used to model semantic similarity at the word and the sentence level.",
"We measure the semantic similarity between words and sentences in the texts and combine the scores into a measure of document-level semantic similarity.",
"1. Word similarity : Word2vec (Mikolov et al., 2013) is a model for learning distributed vector representations of words from a large corpus of text.",
"We use embeddings pre-trained on Wikipedia to compute word-to-word cosine similarity between the candidate summary and the reference.",
"We experiment with three scoring functions to construct the text-level semantic similarity measures from the word-to-word scores: (1) average word similarity on every word pair in the candidate summary and the reference; (2) a greedy method (Mihalcea et al., 2006) that finds the best-matching word with maximum similarity scores and computes the average over the greedily selected pairs; (3) optimal matching (Rus and Lintean, 2012) that finds the optimal alignment of word pairs and then takes the average over the alignment.",
"2. Sentence similarity : Skip-thought (Kiros et al., 2015) is a model for learning distributed representations of sentences.",
"It uses an RNN-encoder to compose the sentence vector, and a decoder conditioned on the resulting vector that tries to predict the previous and the next sentences in the context.",
"We use the model pre-trained on the BookCorpus (Zhu et al., 2015) to generate our sentence vectors.",
"Additionally, we experiment with composing the sentence vectors using word embedding summation and taking the average ( average word embeddings ).",
"We use the same functions for word-level similarity to compute the text semantic similarity from the sentence vectors.",
"In addition to the word and sentence similarities, we investigate methods to model the content similarity between the candidate summary and the reference directly at the document level.",
"learner summaries: TF-IDF is a common method to construct document representations in information retrieval.",
"TF-IDF weighted document vectors are frequently used for measuring query-document similarity.",
"Doc2Vec (Le and Mikolov, 2014) is a neural network model for learning distributed representation of documents.",
"We use the distributed memory of paragraph vectors (PV-DM) variant of the model to construct our vector representation of the summary.",
"The PV-DM model maps the document to a vector space and uses a combination of the document vector and the vectors of surrounding words to predict a target word.",
"Latent Semantic Analysis (LSA) (Landauer, 2006) applies singular value decomposition (SVD) on the term-document matrix to obtain vector space representation of documents.",
"Latent Dirichelet Allocation (LDA) (Blei et al., 2003) represents the documents as mixtures of topics.",
"It can be used to measure the content similarity and topical relevance of documents.",
"We use the Simple English Wikipedia corpus 3 as our background resource to learn the document representations.",
"The Simple English Wikipedia data is used to train the models because its documents are rendered simple for English learners.",
"Therefore, the lexical usage and syntactic structure in Simple English Wikipedia are more similar to the summaries written by learners.",
"We take 3 https://simple.wikipedia.org the cosine similarity between the candidate and the reference vectors to evaluate their similarity.",
"Apart from the content-based measures of the summary, the textual quality of the summary is also important for its overall quality estimation.",
"For instance, good summaries tend to be more coherent and logically consistent.",
"We extract lexical chain -based discourse measures to assess the coherence of the text.",
"Lexical chains model the semantic relations among entities throughout the text.",
"We implement the lexical chaining algorithm developed by Galley and McKeown (2003) and extract 7 lexical chain-based features.",
"4 We also measure the following superficial textual features: Length : Number of words in the summary.",
"Compression ratio : The ratio of the number of words in the summary to the number of words in the reading passage.",
"Type-token ratio : The ratio of the number of unique words to the total number of words in the summary.",
"Text readability : The reading difficulty (the CEFR level) of the passage to be summarized.",
"After the features are extracted, we train a Support Vector Machine (SVM) (Cortes and Vapnik, 1995) model for the classification task (Section 3.1) and a Kernel Ridge Regression (KRR) (Saun-ders et al., 1998) model for the regression task (Section 3.2).",
"Secondly, we construct a sentence similarity matrix between the candidate summary and the original reading passage and apply a Convolutional Neural Network (CNN) model on the similarity matrix to predict the quality of the summary.",
"Lemaire et al. (2005) proposed a computational cognitive model for assessing extractive summarization.",
"In their experiments, they presented 278 American school students with two reading passages and asked them to underline three to five sentences that they considered the most important in the texts.",
"The underlined sentences were compared against the set of all the sentences from the 4 Features include: number of lexical chains per document, number of lexical chains normalized by text length, average/maximum lexical chain length, average/maximum lexical chain span, and the number of long chains.",
"(b) The similarity matrix of a bad Summary B Figure 2: Similarity matrices of two summaries for the same reading passage from the simulated learner data.",
"original passage.",
"They observed that the important sentences selected by the students are highly connected to the rest of the sentences in the text, where the connection is defined by the semantic similarity of the sentences.",
"Based on their observations, we hypothesize that sentences in a good summary should have a well-distributed connection with as many sentences as possible in the original text, because a good summary is supposed to cover all the important information in the text.",
"In contrast, sentences in a bad summary may fail to form a well-distributed connection with sentences in the original text.",
"For example, if a bad summary only captures a few of the main points in the original text, then the sentences in such a summary would be connected only to the sentences where these points are mentioned in the original text, lacking the connections to the rest of the text.",
"If a bad summary is generally irrelevant to the original text, sentences in such a summary would be minimally connected to most of the sentences in the original text.",
"Beside these extreme cases on summary quality scale, summaries of intermediate quality may display patterns of connection to the original passage that share the characteristics of the good summary and the bad summary to various degrees.",
"Following this idea, we construct a sentence similarity matrix between the candidate summary and the original text.",
"Each element of the matrix is a cosine similarity score between the vector representations of a sentence from the summary and a Figure 3: The merged LSTM model sentence from the original text.",
"We use the two sentence similarity models described in Section 4.1.2, skip-thought and average word embeddings, to build the sentence vectors.",
"According to our hypothesis, the quality of the summary corresponds to different patterns in the similarity matrix.",
"The similarity matrix can be viewed as a heat map image from which we can learn patterns to detect the quality of the summary.",
"Figure 2 demonstrates the similarity matrices of two summaries for the same reading passage from the simulated learner data.",
"The shade of the coloured map indicates the degree of similarity between two sentences: the darker the shade is, the more similar the sentences are.",
"In this example, Summary A is an example of a good summary, and Summary B is an example of a bad summary.",
"We can see that sentences in Summary A are similar to a number of sentences in the original text, resulting in a well-distributed heat map.",
"By contrast, sentences in Summary B are similar to five particular sentences in the text and are less similar to other sentences, which is reflected by the isolated dark patches in the image.",
"On the whole, Summary A has higher similarity scores than Summary B, which makes its heat map darker.",
"These two examples illustrate how different patterns may be observed in the heat map of the summaries of different quality.",
"To learn these patterns automatically, we apply a CNN model on the similarity matrix to predict the quality of the summary.",
"However, it should be noted that CNNs usually work best when a large amount of training data is available, whereas the summary data we have collected represents a relatively small dataset.",
"We compare the results of the CNN model against the feature extraction approach to investigate how well the model can learn from the limited amount of data.",
"Thirdly, we experiment with several LSTM-based neural network models for assessing the summary quality.",
"The LSTM-based models are used to learn representations of the summary and estimate its quality automatically, without having to manually extract features from it.",
"Recurrent neural networks with LSTM units (Hochreiter and Schmidhuber, 1997) have shown impressive results on various NLP tasks (Wang and Jiang, 2016; Rocktaschel et al., 2015).",
"In essence, they are capable of embedding long text sequences into a vector representation which can later be decoded for use in various applications.",
"Inspired by the recent advances with LSTMs in NLP tasks, we propose a merged LSTM model (see Figure 3) for assessing learner summaries.",
"The merged LSTM model encodes the summary and the reading text separately with two bidirectional LSTMs, and merges the embedded summary and embedded reading text representations into a joint representation to predict the summary score.",
"We explore four functions to merge the encoded vectors, including concatenation , addition , dot product and linear combination .",
"As the merged LSTM model encodes the summary and reading text separately, it needs to propagate dependencies over long sequences to compare the summary and the text.",
"The joint representation obtained in the merged LSTM model cannot fully capture the connection between the summary and the text.",
"In this section, we propose an attention-based LSTM model which makes use of an attention mechanism to better model the relation between the summary and the reading text.",
"In general, the attention model learns a soft alignment between the input and the output in the encoder-decoder framework.",
"The attention mechanism allows the model to learn what to attend to in the input states and mitigates the long-dependency bottleneck of the LSTM.",
"In the attention-based model for summary assessment, the original text and the summary are still encoded using two separate LSTMs.",
"However, the text representation is formed by a weighted sum of the hidden states of the text encoder, where the weights can be interpreted as the degree to which the summary attends to a particular token in the text.",
"The summary representation and the text representation are combined with a nonlinear function into a joint representation and then fed into the fully connected layer to predict the summary quality.",
"Figure 4 is an illustration of the attention mechanism between the embedded summary and the embedded input text.",
"Mathematically, suppose s is the encoded summary vector and a ( t ) is the hidden state of the LSTM for the text at each token t .",
"Then the final representation r of the encoded text is a weighted sum of a ( t ) : r = a w = T (cid:88) t =1 a ( t ) w ( t ) The weight for each token w ( t ) is computed by: w ( t ) = exp ( ( t )) (cid:80) Tt =1 exp ( ( t )) where ( t ) = W a a ( t ) + W s s Figure 5: Combining three approaches using ensemble modelling is an alignment model.",
"The joint representation m of the summary and the text is a combination of the summary vector s and the weighted input text vector r .",
"where W sm , W rm and b are the parameters of a linear combination function.",
"Ensemble modelling combines several machine learning techniques into one model in order to improve the stability and accuracy of the prediction.",
"We explore combining the three different models (see Figure 5) into a single model by taking the majority vote from the binary classification models and taking the average value of the predicted scores from the regression models.",
"We compare the performance of the combined models against the individual models to investigate if and to what extent ensemble modelling is useful for assessing the summaries.",
"We evaluate our models on the real learner data and on the simulated learner data, for consistency, using 5-fold cross validation.",
"In each fold, 60% of the data is used as the training set, 20% as the development set, and 20% as the test set.",
"5 We compare our models against five baselines: most frequent baseline , random baseline , ROUGE 5 We choose the best model based on the development set, retrain the selected model on the combination of the training and the development data, and evaluate the model on the test set.",
"baseline , 6 BLEU baseline , and ROUGE + BLEU baseline .",
"We use accuracy to evaluate the models on the simulated learner data, and on the real learner data, we report scores of two evaluation metrics: Pearson correlation coefficient (PCC) and Root Mean Square Error (RMSE), which are commonly used for evaluating regression models.",
"Table 2 shows the results of the baseline and the four types of models on the simulated learner data, and Table 3 reports the results of the models on the real learner data.",
"On the simulated learner data, the best variants from all three methods outperform the baselines.",
"The improvement is statistically signifi-cant ( p< 0 . 05 ) using t -test for all three methods.",
"We combine the best variants from the three approaches into a single system by taking the majority vote from the models.",
"The resulting system achieves the best accuracy of 75 .",
"3% in predicting the binary type of the summary on the simulated learner data.",
"On the real learner data, we found that the feature extraction-based model outperforms the 6 A baseline trained on ROUGE features only.",
"CNN model and LSTM-based models, which also significantly outperform the baselines.",
"The results suggest that the neural network-based models are not as effective as the traditional feature extraction-based method for the regression task, at least when the training data is limited in size.",
"However, although the CNN and LSTM models are not the best-performing models individually, a combination of the three methods (KRR, CNN and LSTM) still improves the performance.",
"We believe that this is because the three independent models capture different aspects of the summary quality that are complementary to each other.",
"In addition, the combined model is more robust to outliers.",
"For example, when two models agree on an instance while the third model does not, the combined model will select the majority vote or the average score of the model estimations, hence achieving a better performance in estimating the summary quality.",
"Overall, the best model performance is close to human performance.",
"We also observe that when assessing the summaries with extracted features, using the original document as the reference works better than using other types of reference.",
"This might be because the extractive summarizers only select sentences that are highly related to others, where the relation Models Variants PCC RMSE Baseline Baseline type most-frequent -1.30 random 0.011 1.79 ROUGE 0.499 1.12 BLEU 0.208 2.88 ROUGE + BLEU 0.499 1.11 KRR reference type random 0.517 1.11 TextRank 0.576 1.06 MEAD 0.557 1.08 original text 0.636 0.99 CNN similarity matrix type avg word embeddings 0.504 1.12 skip-thought vectors 0.458 1.14 LSTM Merged LSTM merging concatenation 0.487 1.13 addition 0.466 1.13 function multiplication 0.490 1.12 linear combination 0.484 1.13 Attention LSTM 0.494 1.12 Combined model KRR+CNN+LSTM 0.665* 0.97* Table 3: Results of the regression model performance on the learner data.",
"is typically judged by the word overlap, therefore missing the bits of text where topical words occur less often.",
"In this paper, we introduce a summarization task for testing reading comprehension of learners and present several automated systems to assess the quality of the learner summary.",
"We collected summaries from members of our university and from the real learners to evaluate our systems.",
"We propose and compare three approaches to assess the summaries, including the feature extraction-based model, the CNN-based model using similarity matrix, and the LSTM-based model.",
"The best system, built using a combination of three models, yields an accuracy of 75 .",
"3% on the simulated learner data, and P CC = 0 .",
"665 , RMSE = 0 .",
"97 on the real learner data.",
"Although not directly comparable to other studies, we note that these results are higher than those reported in previous work.",
"Our systems are generalizable and address the limitations of the previous research in this area as: (1) they are capable of evaluating the quality of a summary without the need of being trained on each input text separately, (2) they do not require a pre-defined reference, and (3) they are capable of capturing content similarity beyond verbatim overlap, taking into account paraphrasing and higher levels of abstraction.",
"We believe that although the application presented in this paper focuses on assessing learner summaries, these techniques may also be useful for benchmarking automated summarization systems.",
"Evaluation of these techniques for benchmarking automated summarization systems is one direction for our future research.",
"We make the summary data available at https://www.cl.cam.ac.uk/ek358/learner-summaries.html .",
"This paper reports on research supported by Cambridge Assessment, University of Cambridge.",
"We also thank Cambridge Assessment for their assistance in the collection of the real learner data.",
"We are grateful to the anonymous reviewers for their valuable feedback."
] | [
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"method",
"method",
"method",
"result",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Extracting temporal relations ( before, after, overlapping , etc.) is a key aspect of understanding events described in natural language.",
"We argue that this task would gain from the availability of a resource that provides prior knowledge in the form of the temporal order that events usually follow.",
"This paper develops such a resource a probabilistic knowledge base acquired in the news domain by extracting temporal relations between events from the New York Times (NYT) articles over a 20-year span (19872007).",
"We show that existing temporal extraction systems can be improved via this resource.",
"As a byproduct, we also show that interesting statistics can be retrieved from this resource, which can potentially benefit other time-aware tasks.",
"The proposed system and resource are both publicly available 1 .",
"Time is an important dimension of knowledge representation.",
"In natural language, temporal information is often expressed as relations between events.",
"Reasoning over these relations can help figuring out when things happened, estimating how long things take, and summarizing the timeline of a series of events.",
"Several recent SemEval workshops are a good showcase of the importance of this topic (Verhagen et al., 2007, 2010; UzZaman et al., 2013; Llorens et al., 2015; Minard et al., 2015; Bethard et al., 2015, 2016, 2017).",
"One of the challenges in temporal relation extraction is that it requires high-level prior knowledge of the temporal order that events usually follow.",
"In Example 1, we have deleted events from several snippets from CNN, so that we cannot use our prior knowledge of those events.",
"We are also 1 http://cogcomp.org/page/publication_ view/830 told that e1 and e2 have the same tense, and e3 and e4 have the same tense, so we cannot resort to their tenses to tell which one happens earlier.",
"As a result, it is very difficult even for humans to figure out the temporal relations (referred to as TempRels hereafter) between those events.",
"This is because rich temporal information is encoded in the events' names, and this often plays an indispensable role in making our decisions.",
"In the first paragraph of Example 1, it is difficult to understand what really happened without the actual event verbs; let alone the TempRels between them.",
"In the second paragraph, things are even more interesting: if we had e3:dislike and e4:stop , then we would know easily that I dislike occurs after they stop the column.",
"However, if we had e3:ask and e4:help , then the relation between e3 and e4 is now reversed and e3 is before e4 .",
"We are in need of the event names to determine the TempRels; however, we do not have them in Example 1. In Example 2, where we show the complete sentences, the task has become much easier for humans due to our prior knowledge, namely, that explosion usually leads to casualties and that people usually ask before they get help.",
"Motivated by these examples (which are in fact very common), we believe in the importance of such a prior knowledge in determining TempRels between events.",
"Example 1: Difficulty in understanding TempRels when event content is missing.",
"Note that e1 and e2 have the same tense, and e3 and e4 have the same tense.",
"More than 10 people have ( e1:died ), police said.",
"A car ( e2:exploded ) on Friday in the middle of a group of men playing volleyball.",
"The first thing I ( e3:ask ) is that they ( e4:help ) writing this column.",
"However, most existing systems only make use of rather local features of these events, which cannot represent the prior knowledge humans have 841 Example Pairs Before (%) After (%) accept determine 42 26 ask help 86 9 attend schedule 1 82 accept propose 10 77 die explode 14 83 ...",
"about these events and their typical order.",
"As a result, existing systems almost always attempt to solve the situations shown in Example 1, even when they are actually presented with input as in Example 2. The first contribution of this work is thus the construction of such a resource in the form of a probabilistic knowledge base, constructed from a large New York Times (NYT) corpus.",
"We hereafter name our resource TEMporal relation PRObabilistic knowledge Base (TEMPROB ), which can potentially benefit many time-aware tasks.",
"A few example entries of TEMPROB are shown in Table 1. Second , we show that existing TempRel extraction systems can be improved using TEMPROB , either in a local method or in a global method (explained later), by a significant margin in performance on the benchmark TimeBank-Dense dataset (Cassidy et al., 2014).",
"Example 2: The original sentences in Example 1. More than 10 people have ( e1:died ), police said.",
"A car ( e2:exploded ) on Friday in the middle of a group of men playing volleyball.",
"The first thing I ( e3:ask ) is that they ( e4:help ) writing this column.",
"The rest of the paper is organized as follows.",
"Section 2 provides a literature review of TempRels extraction in NLP.",
"Section 3 describes in detail the construction of TEMPROB .",
"In Sec. 4, we show that TEMPROB can be used in existing TempRels extraction systems and lead to significant improvement.",
"Finally, we conclude in Sec. 5. 2 Related Work The TempRels between events can be represented by an edge-labeled graph, where the nodes are events, and the edges are labeled with TempRels (Chambers and Jurafsky, 2008; Do et al., 2012; Ning et al., 2017).",
"Given all the nodes, we work on the TempRel extraction task, which is to assign labels to the edges in a temporal graph (a vague or none label is often included to account for the non-existence of an edge).",
"Early work includes Mani et al. (2006); Chambers et al. (2007); Bethard et al. (2007); Verhagen and Pustejovsky (2008), where the problem was formulated as learning a classification model for determining the label of every edge locally without referring to other edges (i.e., local methods).",
"The predicted temporal graphs by these methods may violate the transitive properties that a temporal graph should possess.",
"For example, given three nodes, e1 , e2 , and e3 , a local method can possibly classify ( e1 , e2 )= before , ( e2 , e3 )= before , and ( e1 , e3 )= after , which is obviously wrong since before is a transitive relation and ( e1 , e2 )= before and ( e2 , e3 )= before dictate that ( e1 , e3 )= before .",
"Recent state-of-the-art methods, (Chambers et al., 2014; Mirza and Tonelli, 2016), circumvented this issue by growing the predicted temporal graph in a multi-step manner, where transitive graph closure is performed on the graph every time a new edge is labeled.",
"This is conceptually solving the structured prediction problem greedily.",
"Another fam-ily of methods resorted to Integer Linear Programming (ILP) (Roth and Yih, 2004) to get exact inference to this problem (i.e., global methods), where the entire graph is solved simultaneously and the transitive properties are enforced naturally via ILP constraints (Bramsen et al., 2006; Chambers and Jurafsky, 2008; Denis and Muller, 2011; Do et al., 2012).",
"A most recent work brought this idea even further, by incorporating structural constraints into the learning phase as well (Ning et al., 2017).",
"The TempRel extraction task has a strong dependency on prior knowledge, as shown in our earlier examples.",
"However, very limited attention has been paid to generating such a resource and to make use of it; to our knowledge, the TEMPROB proposed in this work is completely new.",
"We find that the time-sensitive relations proposed in Jiang et al. (2016) is a close one in literature (although it is still very different).",
"Jiang et al. (2016) worked on the knowledge graph completion task.",
"Based on YAGO2 (Hoffart et al., 2013) and Freebase (Bollacker et al., 2008), it manually selects a small number of relations that are time-sensitive (10 relations from YAGO2 and 87 relations from Freebase, respectively).",
"Exemplar relations are wasBornIn diedIn and graduate-From workAt , where means temporally be-842 fore.",
"Our work significantly differs from the time-sensitive relations in Jiang et al. (2016) in the following aspects.",
"First, scale difference: Jiang et al. (2016) can only extract a small number of relations ( < 100), but we work on general semantic frames (tens of thousands) and the relations between any two of them, which we think has broader applications.",
"Second, granularity difference: the smallest granularity in Jiang et al. (2016) is one year 2 , i.e., only when two events happened in different years can they know the temporal order of them, but we can handle implicit temporal orders without having to refer to the physical time points of events (i.e., the granularity can be arbitrarily small).",
"Third, domain difference: while Jiang et al. (2016) extracts time-sensitive relations from structured knowledge bases (where events are explicitly anchored to a time point), we extract relations from unstructured natural language text (where the physical time points may not even exist in text).",
"Our task is more general and it allows us to extract much more relations, as reflected by the 1st difference above.",
"Another related work is the VerbOcean (Chklovski and Pantel, 2004), which extracts temporal relations between pairs of verbs using manually designed lexico-syntactic patterns (there are in total 12 such patterns), in contrast to the automatic extraction method proposed in this work.",
"In addition, the only termporal relation considered in VerbOceans is before , while we also consider relations such as after , includes , included , equal , and vague .",
"As expected, the total numbers of verbs and before relations in VerbOcean is about 3K and 4K, respectively, both of which are much smaller than TEMPROB , which contains 51K verb frames (i.e., disambiguated verbs), 9.2M ( verb 1 , verb 2 , relation ) entries, and up to 80M temporal relations altogether.",
"All these differences necessitate the construction of a new resource for TempRel extraction, which we explain below.",
"to extract events (Sec. 3.1) and extract TempRels (Sec. 3.2), from a large, unannotated 3 corpus (Sec. 3.3).",
"We also show some interesting statistics discovered in TEMPROB that may benefit other tasks (Sec. 3.4).",
"In the next, we describe each of these elements.",
"Extracting events and the relations between them (e.g., coreference, causality, entailment, and temporal) have long been an active area in the NLP community.",
"Generally speaking, an event is considered to be an action associated with corresponding participants involved in this action.",
"In this work, following (Peng and Roth, 2016; Peng et al., 2016; Spiliopoulou et al., 2017) we consider semantic-frame based events, which can be directly detected via off-the-shelf semantic role labeling (SRL) tools.",
"This aligns well with previous works on event detection ( Hovy et al., 2013; Peng et al., 2016).",
"Depending on the events of interest, the SRL results are often a superset of events and need to be filtered afterwards (Spiliopoulou et al., 2017).",
"For example, in ERE (Song et al., 2015) and Event Nugget Detection (Mitamura et al., 2015), events are limited to a set of predefined types (such as Business, Conflict, and Justice); in the context of TempRels, existing datasets have focused more on predicate verbs rather than nominals 4 (Pustejovsky et al., 2003; Graff, 2002; UzZaman et al., 2013).",
"Therefore, we only look at verb semantic frames in this work due to the difficulty of getting TempRel annotation for nominal events, and we will use verb (semantic frames) interchangeably with events hereafter in this paper.",
"Given the events extracted in a given article (i.e., given the nodes in a graph), we next explain how the TempRels are extracted (that is, the edge labels in the graph).",
"We adopt the commonly used feature set in TempRel extraction (Do et al., 2012; Ning et al., 2017) and here we simply list them for reproducibility.",
"For each pair of nodes, the follow-3 Unannotated with TempRels.",
"4 Some nominal events were indeed annotated in TimeBank (Pustejovsky et al., 2003), but their annotation did not align well with modern nominal-SRL methods.",
"ing features are extracted.",
"(i) The part-of-speech (POS) tags from each individual verb and from its neighboring three words.",
"(ii) The distance between them in terms of the number of tokens.",
"(iii) The modal verbs between the event mention (i.e., will, would, can, could, may and might ).",
"(iv) The temporal connectives between the event mentions (e.g., before, after and since ).",
"(v) Whether the two verbs have a common synonym from their synsets in WordNet (Fellbaum, 1998).",
"(vi) Whether the input event mentions have a common derivational form derived from WordNet.",
"(vii) The head word of the preposition phrase that covers each verb, respectively.",
"With the features defined above, we need to train a system that can annotate the TempRels in each document.",
"The TimeBank-Dense dataset (TB-Dense) ( Cassidy et al., 2014) is known to have the best quality in terms of its high density of TempRels and is a benchmark dataset for the TempRel extraction task.",
"It contains 36 documents from TimeBank (Pustejovsky et al., 2003) which were re-annotated using the dense event ordering framework proposed in (Cassidy et al., 2014).",
"We follow its label set (denoted by R ) of before , after , includes , included , equal , and vague in this study.",
"Due to the slight event annotation difference in TBDense, we collect our training data as follows.",
"We first extract all the verb semantic frames from the raw text of TBDense.",
"Then we only keep those semantic frames that are matched to an event in TBDense (about 85% semantic frames are kept in this stage).",
"By doing so, we can simply use the TempRel annotations provided in TBDense.",
"Hereafter the TBDense dataset used in this paper refers to this version unless otherwise specified.",
"We group the TempRels by the sentence distance of the two events of each relation 5 .",
"Then we use the averaged perceptron algorithm (Freund and Schapire, 1998) implemented in the Illinois LBJava package (Rizzolo and Roth, 2010) to learn from the training data described above.",
"Since only relations that have sentence distance 0 or 1 are annotated in TBDense, we will have two classifiers, one for same sentence relations, and one for neighboring sentence relations, respectively.",
"Note that TBDense was originally split into Train (22 docs), Dev (5 docs), and Test (9 docs).",
"In all subsequent analysis, we combined Train and Dev and we performed 3-fold cross validation on the 27 documents (in total about 10K relations) to tune the parameters in any classifier.",
"When generating TEMPROB , we need to process a large number of articles, so we adopt the greedy inference strategy described earlier due to its computational efficiency (Chambers et al., 2014; Mirza and Tonelli, 2016).",
"Specifically, we apply the same-sentence relation classifier before the neighboring-sentence relation classifier; whenever a new relation is added in this article, a transitive graph closure is performed immediately.",
"By doing this, if an edge is already labeled during the closure phase, it will not be labeled again, so con-flicts are avoided.",
"As mentioned earlier, the source corpus on which we are going to construct TEMPROB is comprised of NYT articles from 20 years (1987-2007) 6 .",
"It contains more than 1 million documents and we extract events and corresponding features from each document using the Illinois Curator package (Clarke et al., 2012) on Amazon Web Services (AWS) Cloud.",
"In total, we discovered 51K unique verb semantic frames and 80M relations among them in the NYT corpus (15K of the verb frames had more than 20 relations extracted and 9K had more than 100 relations).",
"We first describe the notations that we are going to use.",
"We denote the set of all verb semantic frames by V .",
"Let D i , i = 1 , . . . , N be the i -th document in our corpus, where N is the total number of documents.",
"Let G i = ( V i , E i ) be the temporal graph inferred from D i using the approach described above, where V i V is the set of verbs/events extracted in D i and E i = { ( v m , v n , r mn ) } m<n V i V i R is the edge set of D i , which is composed of TempRel triplets; specifically, a TempRel triplet ( v m , v n , r mn ) E i represents that in document D i , the TempRel between v m and v n is r mn .",
"Due to the symmetry in TempRels, we only keep the triplets with m < n in E i .",
"Assuming that the 6 https://catalog.ldc.upenn.edu/LDC2008T19 844 verbs in V i are ordered by their appearance order in text, then m < n means that in the i -th document, v m appears earlier in text than v n does.",
"Given the usual confusion between that one event is temporally before another and that one event is physically appearing before another in text, we will refer to temporally before as T-Before and physically before as P-Before .",
"Using this language, for example, E i only keeps the triplets that v m is P-Before v n in D i .",
"We first show extreme cases that some events are almost always labeled as T-Before or T-After in the corpus.",
"Specifically, for each pair of verbs v i , v j V , we define the following ratios: b = C ( v i , v j , before ) C ( v i , v j , before ) + C ( v i , v j , after ) , a = 1 b , (1) where C ( v i , v j , r ) is the count of v i P-Before v j with TempRel r R : C ( v i , v j , r ) = N i =1 ( v m ,v n ,r mn ) E i I { v m = v i & v n = v j & r mn = r } , (2) where I {} is the indicator function.",
"Add-one smoothing technique from language modeling is used to avoid divided-by-zero errors.",
"In Table 2, we show some event pairs with either b > 0 .",
"9 (upper part) or a > 0 .",
"9 (lower part).",
"We think the examples from Table 2 are intuitively appealing: chop happens before taste , clean happens after contaminate , etc.",
"More interestingly, in the lower part of the table, we show pairs in which the physical order is different from the temporal order: for example, when achieve is P-Before desire , it is still labeled as T-After in most cases (104 out of 111 times), which is correct intuitively.",
"In practice, e.g., in the TBDense dataset (Cassidy et al., 2014), roughly 30%-40% of the P-Before pairs are T-After.",
"Therefore, it is important to be able to capture their temporal order rather than simply taking their physical order if one wants to understand the temporal implication of verbs.",
"For each verb v , we define the marginal count of v being P-Before to arbitrary verbs with TempRel r R as C ( v, r ) = v i VC ( v, v i , r ) .",
"Then for every other verb v , we define P ( v T-Before v | v T-Before ) C ( v, v , before ) C ( v, before ) , (3) Example Pairs #T-Before #T-After chop.01 taste.01 133 8 concern.01 protect.01 110 10 conspire.01 kill.01 113 6 debate.01 vote.01 48 5 dedicate.01 promote.02 67 7 fight.01 overthrow.01 98 8 achieve.01 desire.01 7 104 admire.01 respect.01 7 121 clean.02 contaminate.01 3 82 defend.01 accuse.01 13 160 die.01 crash.01 8 223 overthrow.01 elect.01 3 100 Table 2: Several extreme cases from TEMPROB , where some event is almost always labeled to be T-Before or T-After throughout the NYT corpus.",
"For a specific verb, e.g., v = investigate , each verb v V is sorted by the two conditional probabilities above.",
"Then the most probable verbs that temporally precede or follow v are shown in Fig. 1, where the y-axes are the corresponding conditional probabilities.",
"We can see reasonable event sequences like { involve , kill , suspect , steal } investigate { report , prosecute , pay , punish } , which indicates the possibility of using TEMPROB for event sequence predictions or story cloze tasks.",
"There are also suspicious pairs like know in the T-Before list of investigate (Fig. 1a), report in the T-Before list of bomb (Fig. 1b), and play in the T-After list of mourn (Fig. 1c).",
"Since the arguments of these verb frames are not considered here, whether these few seemingly counter-intuitive pairs come from system error or from a special context needs further investigation.",
"In the above, we have explained the construction of TEMPROB and shown some interesting examples from it, which were meant to visualize its correctness.",
"In this section, we first quantify the correctness of the prior obtained in TEMPROB , and 845",
"In Table 2, we showed examples with either b or a > 0 .",
"9 .",
"We argued that they seem correct.",
"Here we quantify the correctness of b and a based on TBDense.",
"Specifically, we collected all the gold T-Before and T-After pairs.",
"Let [0 .",
"5 , 1) be a constant threshold.",
"Imagine a naive predictor such that for each pair of events v i and v j , if b > , it predicts that v i is T-Before v j ; if a > , it predicts that v i is T-After v j ; otherwise, it predicts that v i is T-Vague to v j .",
"We expect that a higher b (or a ) represents a higher confidence for an instance to be labeled T-Before (or T-After).",
"Table 3 shows the performance of this predictor, which meets our expectation and thus justifies the validity of TEMPROB .",
"As we gradually increase the value of in Table 3, the precision increases in roughly the same pace with , which indicates that the values of b and a 7 from TEMPROB indeed represent the confidence level.",
"The decrease in recall is also expected because more examples are labeled as T-Vague when is larger.",
"7 Recall the definitions of b and a in Eq.",
"(1).",
"another dataset that is not in the TempRel domain.",
"Instead, we downloaded the EventCausality dataset 8 (Do et al., 2011).",
"For each causally related pair e1 and e2 , if EventCausality annotates that e1 causes e2 , we changed it to be T-Before; if EventCausality annotates that e1 is caused by e2 , we changed it to be T-after.",
"Therefore, based on the assumption that the cause event is T-Before the result event, we converted the EventCausality dataset to be a TempRel dataset and it thus could also be used to evaluate the quality of TEMPROB .",
"We adopted the same predictor used in Table 3 8 http://cogcomp.org/page/resource_ view/27 846 with = 0 .",
"5 and in Table 4, we compared it with two baselines:",
"(i) always predicting T-Before and",
"(ii) always predicting T-After.",
"First, the accuracy (66.2%) in Table 4 is rather consistent with its counterpart in Table 3, confirming the stability of statistics from TEMPROB .",
"Second, by directly using the prior statistics b and a from TEMPROB , we can improve the precision of both labels with a significant margin relative to the two baselines (17.0% for T-Before and 15.9% for T-After).",
"Overall, the accuracy was improved by 11.5%.",
"The original purpose of TEMPROB was to improve TempRel extraction.",
"We show it from two perspectives: How effective the prior distributions obtained from TEMPROB are",
"(i) as features in local methods and",
"(ii) as regularization terms in global methods.",
"The results below were evaluated on the test split of TB-Dense (Cassidy et al., 2014).",
"We first test how well the prior distributions from TEMPROB can be used as features in improving local methods for TempRel extraction.",
"In Table 5, we used the original feature set proposed in Sec. 3.2.1 as the baseline, and added the prior distribution obtained from TEMPROB on top of it.",
"Specifically, we added b (see Eq.",
"(1)) and { f r } r R , respectively, where { f r } r R is the prior distributions of all labels, i.e., f r ( v i , v j ) = C ( v i , v j , r ) r RC ( v i , v j , r ) , r R. (5) Recall function C is defined in Eq.",
"(2).",
"All comparisons were decomposed to same sentence relations (Dist=0) and neighboring sentence relations (Dist=1) for a better understanding of the behavior.",
"All classifiers were trained using the averaged perceptron algorithm (Freund and Schapire, 1998) and tuned by 3-fold cross validation.",
"From Table 5, we can see that simply adding b into the feature set could improve the original system F 1 by 1.8% (Dist=0) and 3.0% (Dist=1).",
"If we further add as features the full set of prior distributions { f r } r R , the improvement comes to 2.7% and 6.5%, respectively.",
"Noticing that the feature is more helpful for Dist=1, we think that it is because distant pairs usually have less lexical dependency and thus need more prior information provided by our new feature.",
"With Dist=0 and Dist=1 combined (numbers not shown in the Table), the 3rd line improved the original by 4.7% in F 1 and by 5.1% in the temporal awareness F-score (another metric used in the TempEval3 workshop).",
"As mentioned earlier in Sec. 2, many systems adopt a global inference method via integer linear programming (ILP) (Roth and Yih, 2004) to enforce transitivity constraints over an entire temporal graph (Bramsen et al., 2006; Chambers and Jurafsky, 2008; Denis and Muller, 2011; Do et al., 2012; Ning et al., 2017).",
"In addition to the usage shown in Sec. 4.2.1, the prior distributions from TEMPROB can also be used to regularize the conventional ILP formulation.",
"Specifically, in each document, let I r ( ij ) { 0 , 1 } be the indicator function of relation r for event i and event j ; let x r ( ij ) [0 , 1] be the corresponding soft-max score obtained from the local classifiers (de-pending on the sentence distance between i and j ).",
"Then the ILP objective for global inference is 847 formulated as follows.",
"I = argmax I ij E r R ( x r ( ij ) + f r ( ij )) I r ( ij ) (6) s.t. r I r ( ij ) = 1 (uniqueness) , I r ( ij ) = I r ( ji ) , (symmetry) I r 1 ( ij ) + I r 2 ( jk ) Mm =1 I r m 3 ( ik ) 1 , (transitivity) for all distinct events i , j , and k , where E = { ij | sentence dist ( i, j ) 1 } , adjusts the regularization term and was heuristically set to 0.5 in this work, r is the reverse relation of r , and M is the number of possible relations for r 3 when r 1 and r 2 are true.",
"Note our difference from the ILP in (Ning et al., 2017) is the underlined regularization term f r ( ij ) (which itself is defined in Eq.",
"(5)) obtained from TEMPROB .",
"We present our results on the test split of TBDense in Table 6, which is an ablation study showing step-by-step improvements in two metrics.",
"In addition to the straightforward precision, recall, and F 1 metric, we also compared the F 1 of the temporal awareness metric used in TempEval3 (UzZa-man et al., 2013).",
"The awareness metric performs graph reduction and closure before evaluation so as to better capture how useful a temporal graph is.",
"Details of this metric can be found in UzZaman and Allen (2011); UzZaman et al. (2013); Ning et al. (2017).",
"In Table 6, the baseline used the original feature set proposed in Sec. 3.2.1 and applied global ILP inference with transitivity constraints.",
"Technically, it is to solve Eq.",
"(6) with = 0 (i.e., unregularized) on top of the original system in Table 5. Apart from some implementation details, this baseline is also the same as many existing global methods as Chambers and Jurafsky (2008); Do et al. (2012).",
"System 2, +Feature: { f r } r R , Label P R F 1 before +0.3 +15 +6 after +4 +4 +4 equal +11 0 +2 includes +17 0 +0.2 included +8 0 +2 vague +3 -4 -1 Table 7: Label-wise performance improvement of System 3 over System 1 in Table 6. We can see that incorporating TEMPROB improves the recall of before and after , and improves the precision of all labels, with a slight drop in the recall of vague .",
"is to add prior distributions as features when training the local classifiers.",
"Technically, the scores x r ( ij ) 's in Eq.",
"(6) used by baseline were changed.",
"We know from Table 5 that adding { f r } r R made the local decisions better.",
"Here the performance of System 2 shows that this was also the case for the global decisions made via ILP: both precision and recall got improved, and F 1 and awareness were both improved by a large margin, with 5.1% in F 1 and 6.6% in awareness F 1 .",
"On top of this, System 3 sets = 0 .",
"5 in Eq.",
"(6) to add regularizations to the conventional ILP formulation.",
"The sum of these regularization terms represents a confidence score of how coherent the predicted temporal graph is to our TEMPROB , which we also want to maximize.",
"Even though a considerable amount of information from TEMPROB had already been encoded as features (as shown by the large improvements by System 2), these regularizations were still able to further improve the precision, recall and awareness scores.",
"To sum up, the total improvement over the baseline system brought by TEMPROB is 5.9% in F 1 and 7.1% in awareness F 1 , both with a notable margin.",
"Table 7 furthermore decomposes this improvement into each TempRel label.",
"To compare with state-of-the-art systems, which all used gold event properties (i.e., Tense, Aspect, Modality, and Polarity), we retrained System 3 in Table 6 with these gold properties and show the results in Table 8.",
"We reproduced the results of CAEVO 9 (Chambers et al., 2014) and Ning et al. (2017) 10 and evaluated them on the partial TBDense test split 11 .",
"Under both metrics, the 9 https://github.com/nchambers/caevo 10 http://cogcomp.org/page/publication_ view/822 11 There are 731 relations in the partial TBDense test split (201 before , 138 after , 39 includes , 31 included , 14 equal , and 308 vague ).",
"proposed system achieved the best performance.",
"An interesting fact is that even without these gold properties, our System 3 in Table 6 was already better than CAEVO (on Line 1) and Ning et al. (2017) (on Line 2) in both metrics.",
"This is appealing because in practice, those gold properties may not exist, but our proposed system can still generate state-of-the-art performance without them.",
"For readers who are interested in the complete TBDense dataset, we also performed a naive augmentation as follows.",
"Recall that System 3 only makes predictions to a subset of the complete TBDense dataset.",
"We kept this subset of predictions, and filled the missing predictions by Ning et al. (2017).",
"Performances of this naively augmented proposed system is compared with CAEVO and Ning et al. (2017) on the complete TBDense dataset.",
"We can see that by replacing with predictions from our proposed system, Ning et al. (2017) got a better precision, recall, F 1 , and awareness F 1 , which is the new state-of-the-art on all reported performances on this dataset.",
"Note that the awareness F 1 scores on Lines 4-5 are consistent with reported values in Ning et al. (2017).",
"To our knowledge, the results in Table 8 is the first in literature that reports performances in both metrics, and it is promising to see that the proposed method outperformed state-of-the-art methods in both metrics.",
"Temporal relation (TempRel) extraction is an important and challenging task in NLP, partly due to its strong dependence on prior knowledge.",
"Motivated by practical examples, this paper argues that a resource of the temporal order that events usually follow is helpful.",
"To construct such a resource, we automatically processed a large corpus from NYT with more than 1 million documents using an existing TempRel extraction system and obtained the TEMporal relation PRObabilistic knowledge Base (TEMPROB ).",
"The TEMPROB is a good showcase of the capability of such prior knowledge, and it has shown its power in improving existing TempRel extraction systems on a benchmark dataset, TBDense.",
"The resource and the system reported in this paper are both publicly available 12 and we hope that it can foster more investigations into time-related tasks.",
"We thank all the reviewers for providing useful comments.",
"This research is supported in part by a grant from the Allen Institute for Artificial Intelligence (allenai.org); the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) a research collaboration as part of the IBM AI Horizons Network; by DARPA under agreement number FA8750-13-2-0008; and by the Army Research Laboratory (ARL) under agreement W911NF-09-2-0053.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of DARPA or the U.S. Government.",
"Any opinions, findings, conclusions or recommendations are those of the authors and do not necessarily reflect the view of the ARL."
] | [
"abstain",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"Word embedding is a key component in many downstream applications in processing natural languages.",
"Existing approaches often assume the existence of a large collection of text for learning effective word embedding.",
"However, such a corpus may not be available for some low-resource languages.",
"In this paper, we study how to effectively learn a word embedding model on a corpus with only a few million tokens.",
"In such a situation, the co-occurrence matrix is sparse as the co-occurrences of many word pairs are unobserved.",
"In contrast to existing approaches often only sample a few unobserved word pairs as negative samples, we argue that the zero entries in the co-occurrence matrix also provide valuable information.",
"We then design a Positive-Unlabeled Learning (PU-Learning) approach to factorize the co-occurrence matrix and validate the proposed approaches in four different languages.",
"Learning word representations has become a fundamental problem in processing natural languages.",
"These semantic representations, which map a word into a point in a linear space, have been widely applied in downstream applications, including named entity recognition (Guo et al., 2014), document ranking (Nalisnick et al., 2016), sentiment analysis (Irsoy and Cardie, 2014), question answering (Antol et al., 2015), and image captioning (Karpathy and Fei-Fei, 2015).",
"Over the past few years, various approaches have been proposed to learn word vectors (e.g., (Pennington et al., 2014; Mikolov et al., 2013a; Levy and Goldberg, 2014b; Ji et al., 2015)) based on co-occurrence information between words observed on the training corpus.",
"The intuition behind this is to represent words with similar vectors if they have similar contexts.",
"To learn a good word embedding, most approaches assume a large collection of text is freely available, such that the estimation of word co-occurrences is accurate.",
"For example, the Google Word2Vec model (Mikolov et al., 2013a) is trained on the Google News dataset, which contains around 100 billion tokens, and the GloVe embedding (Pennington et al., 2014) is trained on a crawled corpus that contains 840 billion tokens in total.",
"However, such an as-sumption may not hold for low-resource languages such as Inuit or Sindhi, which are not spoken by many people or have not been put into a digital format.",
"For those languages, usually, only a limited size corpus is available.",
"Training word vectors under such a setting is a challenging problem.",
"One key restriction of the existing approaches is that they often mainly rely on the word pairs that are observed to co-occur on the training data.",
"When the size of the text corpus is small, most word pairs are unobserved, resulting in an extremely sparse co-occurrence matrix (i.e., most entries are zero) 1 .",
"For example, the text8 2 corpus has about 17,000,000 tokens and 71,000 distinct words.",
"The corresponding co-occurrence matrix has more than five billion entries, but only about 45,000,000 are non-zeros (observed on the training corpus).",
"Most existing approaches, such as Glove and Skip-gram, cannot handle a vast number of zero terms in the co-occurrence matrix; therefore, they only sub-sample a small subset of zero entries during the training.",
"In contrast, we argue that the unobserved word pairs can provide valuable information for training a word embedding model, especially when the co-occurrence matrix is very sparse.",
"Inspired 1 Note that the zero term can mean either the pairs of words cannot co-occur or the co-occurrence is not observed in the training corpus.",
"by the success of Positive-Unlabeled Learning (PU-Learning) in collaborative filtering applications (Pan et al., 2008; Hu et al., 2008; Pan and Scholz, 2009; Qin et al., 2010; Paquet and Koenig-stein, 2013; Hsieh et al., 2015), we design an algorithm to effectively learn word embeddings from both positive (observed terms) and unlabeled (un-observed/zero terms) examples.",
"Essentially, by using the square loss to model the unobserved terms and designing an efficient update rule based on linear algebra operations, the proposed PU-Learning framework can be trained efficiently and effectively.",
"We evaluate the performance of the proposed approach in English 3 and other three resource-scarce languages.",
"We collected unlabeled language corpora from Wikipedia and compared the proposed approach with popular approaches, the Glove and the Skip-gram models, for training word embeddings.",
"The experimental results show that our approach significantly outperforms the baseline models, especially when the size of the training corpus is small.",
"Our key contributions are summarized below.",
"We propose a PU-Learning framework for learning word embedding.",
"We tailor the coordinate descent algorithm (Yu et al., 2017b) for solving the corresponding optimization problem.",
"Our experimental results show that PU-Learning improves the word embedding training in the low-resource setting.",
"Learning word vectors.",
"The idea of learning word representations can be traced back to Latent Semantic Analysis (LSA) (Deerwester et al., 1990) and Hyperspace Analogue to Language (HAL) (Lund and Burgess, 1996), where word vectors are generated by factorizing a word-document and word-word co-occurrence matrix, respectively.",
"Similar approaches can also be extended to learn other types of relations between words (Yih et al., 2012; Chang et al., 2013) or entities (Chang et al., 2014).",
"However, due to the limitation of the use of principal component analysis, 3 Although English is not a resource-scarce language, we simulate the low-resource setting in an English corpus.",
"In this way, we leverage the existing evaluation methods to evaluate the proposed approach.",
"these approaches are often less flexible.",
"Besides, directly factorizing the co-occurrence matrix may cause the frequent words dominating the training objective.",
"In the past decade, various approaches have been proposed to improve the training of word embeddings.",
"For example, instead of factorizing the co-occurrence count matrix, Bullinaria and Levy (2007); Levy and Goldberg (2014b) proposed to factorize point-wise mutual information (PMI) and positive PMI (PPMI) matrices as these metrics scale the co-occurrence counts (Bullinaria and Levy, 2007; Levy and Goldberg, 2014b).",
"Skip-gram model with negative-sampling (SGNS) and Continuous Bag-of-Words models (Mikolov et al., 2013b) were proposed for training word vectors on a large scale without consuming a large amount of memory.",
"GloVe (Pennington et al., 2014) is proposed as an alternative to decompose a weighted log co-occurrence matrix with a bias term added to each word.",
"Very recently, WordRank model (Ji et al., 2015) has been proposed to minimize a ranking loss which naturally fits the tasks requiring ranking based evaluation metrics.",
"Stratos et al. (2015) also proposed CCA (canonical correlation analysis)-based word embedding which shows competitive performance.",
"All these approaches focus on the situations where a large text corpus is available.",
"Positive and Unlabeled (PU) Learning: Positive and Unlabeled (PU) learning (Li and Liu, 2005) is proposed for training a model when the positive instances are partially labeled and the unlabeled instances are mostly negative.",
"Recently, PU learning has been used in many classification and collaborative filtering applications due to the nature of implicit feedback in many recommendation systemsusers usually only provide positive feedback (e.g., purchases, clicks) and it is very hard to collect negative feedback.",
"To resolve this problem, a series of PU matrix completion algorithms have been proposed (Pan et al., 2008; Hu et al., 2008; Pan and Scholz, 2009; Qin et al., 2010; Paquet and Koenigstein, 2013; Hsieh et al., 2015; Yu et al., 2017b).",
"The main idea is to assign a small uniform weight to all the missing or zero entries and factorize the corresponding matrix.",
"Among them, Yu et al. (2017b) proposed an efficient algorithm for matrix factorization with PU-learning, such that the weighted matrix is constructed implicitly.",
"In this paper, we 1025 W , C vocabulary of central and context words m, n vocabulary sizes k dimension of word vectors W, H m k and n k latent matrices C ij weight for the (i, j) entry A ij value of the PPMI matrix Q ij value of the co-occurrence matrix w i , h j i -th row of W and j -th row of H b , b bias term i , j regularization parameters | | the size of a set Set of possible word-context pairs + Set of observed word-context pairs Set of unobserved word-context pairs Table 1 : Notations.",
"design a new approach for training word vectors by leveraging the PU-Learning framework and existing word embedding techniques.",
"To the best of our knowledge, this is the first work to train word embedding models using the PU-learning framework.",
"Similar to GloVe and other word embedding learning algorithms, the proposed approach consists of three steps.",
"The first step is to construct a co-occurrence matrix.",
"Follow the literature (Levy and Goldberg, 2014a), we use the PPMI metric to measure the co-occurrence between words.",
"Then, in the second step, a PU-Learning approach is applied to factorize the co-occurrence matrix and generate word vectors and context vectors.",
"Finally, a post-processing step generates the final embedding vector for each word by combining the word vector and the context vector.",
"Various metrics can be used for estimating the co-occurrence between words in a corpus.",
"PPMI metric stems from point-wise mutual information (PMI) which has been widely used as a measure of word association in NLP for various tasks (Church and Hanks, 1990).",
"In our case, each entry P MI ( w, c ) represents the relevant measure between a word w and a context word c by calculating the ratio between their joint probability (the chance they appear together in a local context window) and their marginal probabilities (the chance they appear independently) (Levy and Goldberg, 2014b).",
"More specifically, each entry of PMI matrix can be defined by P MI ( w, c ) = log P ( w, c ) P ( w ) P ( c ) , (1) where P ( w ) , P ( c ) and P ( w, c ) are the the frequency of word w , word c , and word pairs ( w, c ) , respectively.",
"The PMI matrix can be computed based on the co-occurrence counts of word pairs, and it is an information-theoretic association measure which effectively eliminates the big differences in magnitude among entries in the co-occurrence matrix.",
"The intuition behind this is that people usually perceive positive associations between words (e.g. ice and snow).",
"In contrast, the negative association is hard to define (Levy and Goldberg, 2014b).",
"Therefore, it is reasonable to replace the negative entries in the PMI matrix by 0, such that the negative association is treated as uninforma-tive.",
"Empirically, several existing works (Levy et al., 2015; Bullinaria and Levy, 2007) showed that the PPMI metric achieves good performance on various semantic similarity tasks.",
"In practice, we follow the pipeline described in Levy et al. (2015) to build the PPMI matrix and apply several useful tricks to improve its quality.",
"First, we apply a context distribution smoothing mechanism to enlarge the probability of sampling a rare context.",
"In particular, all context counts are scaled to the power of .",
"4 : P P MI ( w, c ) = max log P ( w, c ) P ( w ) P ( c ) , 0 !",
"P ( c ) = #( c ) P c #( c ) , where #( w ) denotes the number of times word w appears.",
"This smoothing mechanism effectively 4 Empirically, = 0 .",
"alleviates PPMI's bias towards rare words (Levy et al., 2015).",
"Next, previous studies show that words that occur too frequent often dominate the training objective (Levy et al., 2015) and degrade the performance of word embedding.",
"To avoid this issue, we follow Levy et al. (2015) to sub-sample words with frequency more than a threshold t with a probability p defined as: p = 1 s t P ( w ) .",
"We proposed a matrix factorization based word embedding model which aims to minimize the reconstruction error on the PPMI matrix.",
"The low-rank embeddings are obtained by solving the following optimization problem: min W,H X i,j C ij ( A ij w Ti h j b i b j ) 2 + X i i k w i k 2 + X j j k h j k 2 , (3) where W and H are m k and n k latent matrices, representing words and context words, respectively.",
"The first term in Eq.",
"(3) aims for minimizing reconstruction error, and the second and third terms are regularization terms.",
"i and j are weights of regularization term.",
"They are hyper-parameters that need to be tuned.",
"The zero entries in co-occurrence matrix denote that two words never appear together in the current corpus, which also refers to unobserved terms.",
"The unobserved term can be either real zero (two words shouldn't be co-occurred even when we use very large corpus) or just missing in the small corpus.",
"In contrast to SGNS sub-sampling a small set of zero entries as negative samples, our model will try to use the information from all zeros.",
"Note that we define the positive samples + to be all the ( w, c ) pairs that appear at least one time in the corpus, and negative samples are word pairs that never appear in the corpus.",
"Weighting function.",
"Eq (3) is very similar to the one used in previous matrix factorization approaches such as GloVe, but we propose a new way to set the weights C ij .",
"If we set equal weights for all the entries, then C ij = constant, and the model is very similar to conducting SVD for the PPMI matrix.",
"Previous work has shown that this approach often suffers from poor performance (Pennington et al., 2014).",
"More advanced methods, such as GloVe, set non-uniform weights for observed entries to reflect their confidence.",
"However, the time complexity of their algorithm is proportional to number of nonzero weights ( | ( i, j ) | C ij 6 = 0 | ), thus they have to set zero weights for all the unobserved entries ( C ij = 0 for ), or try to incorporate a small set of unobserved entries by negative sampling.",
"We propose to set the weights for + and differently using the following scheme: C ij = ( Q ij /x max ) , if Q ij x max , and ( i, j ) + 1 , if Q ij > x max , and ( i, j ) + , ( i, j ) (5) Here x max and are re-weighting parameters, and is the unified weight for unobserved terms.",
"We will discuss them later.",
"For entries in + , we set the non-uniform weights as in GloVe (Pennington et al., 2014), which assigns larger weights to context word that appears more often with the given word, but also avoids overwhelming the other terms.",
"For entries in , instead of setting their weights to be 0, we assign a small constant weight .",
"The main idea is from the literature of PU-learning (Hu et al., 2008; Hsieh et al., 2015): although missing entries are highly uncertain, they are still likely to be true 0, so we should incorporate them in the learning process but multiplying with a smaller weight according to the uncertainty.",
"Therefore, in (5) reflects how confident we are to the zero entries.",
"In our experiments, we set x max = 10 , = 3 / 4 according to (Pennington et al., 2014), and let be a parameter to tune.",
"Experiments show that adding weighting function obviously improves the performance especially on analogy tasks.",
"Bias term.",
"Unlike previous work on PU matrix completion (Yu et al., 2017b; Hsieh et al., 2015), we add the bias terms for word and context word 1027 vectors.",
"Instead of directly using w > i h j to approximate A ij , we use A ij w > i h j + b i + b j .",
"Yu et al. (2017b) design an efficient columnwise coordinate descent algorithm for solving the PU matrix factorization problem; however, they do not consider the bias term in their implementations.",
"To incorporate the bias term in (3), we propose the following training algorithm based on the coordinate descent approach.",
"Our algorithm does not introduce much overhead compared to that in (Yu et al., 2017b).",
"X i,j C ij ( A ij w 0> i h 0 j ) 2 .",
"Also, we denote W 0 = [ w 0 1 , w 0 2 , . . . , w 0 n ] > and H 0 = [ h 0 1 , h 0 2 , . . . , h 0 n ] > .",
"In the column-wise coordinate descent method, at each iteration we pick a t { 1 , . . . , ( k +2) } , and update the t -th column of W 0 and H 0 .",
"The updates can be derived for the following two cases:",
"a. When t k , the elements in the t -th column is w 1 t , . . . , w nt and we can directly use the update rule derived in Yu et al. (2017b) to update them.",
"b. When t = k + 1 , we do not update the corresponding column of W 0 since the elements are all 1, and we use the similar coordinate descent update to update the k + 1 -th column of H 0 (corresponding to b 1 , . . . , b n ).",
"When t = k +2 , we do not update the corresponding column of H 0 (they are all 1) and we update the k + 2 -th column of W 0 (corresponding to b 1 , . . . , b n ) using coordinate descent.",
"With some further derivations, we can show that the algorithm only requires O ( nnz ( A ) + nk ) time to update each column, 5 so the overall complexity is O ( nnz ( A ) k + nk 2 ) time per epoch, which is only proportional to number of nonzero terms in A .",
"Therefore, with the same time complexity as GloVe, we can utilize the information from all the zero entries in A instead of only sub-sampling a small set of zero entries.",
"In the PU-Learning formulation, represents the unified weight that assigned to the unobserved terms.",
"Intuitively, reflects the confidence on unobserved entrieslarger means that we are quite certain about the zeroes, while small indicates the many of unobserved pairs are not truly zero.",
"When = 0 , the PU-Learning approach reduces to a model similar to GloVe, which discards all the unobserved terms.",
"In practice, is an important parameter to tune, and we find that = 0 .",
"0625 achieves the best results in general.",
"Regarding the other parameter, is the regularization term for preventing the embedding model from over-fitting.",
"In practice, we found the performance is not very sensitive to as long as it is resonably small.",
"More discussion about the parameter setting can be found in Section 5.",
"Post-processing of Word/Context Vectors The PU-Learning framework factorizes the PPMI matrix and generates two vectors for each word i , w i R k and h i R k .",
"The former represents the word when it is the central word and the latter represents the word when it is in context.",
"Levy et al. (2015) shows that averaging these two vectors ( u avg i = w i + h i ) leads to consistently better performance.",
"The same trick of constructing word vectors is also used in GloVe.",
"Therefore, in the experiments, we evaluate all models with u avg .",
"Our goal in this paper is to train word embedding models for low-resource languages.",
"In this section, we describe the experimental designs to evaluate the proposed PU-learning approach.",
"We first describe the data sets and the evaluation metrics.",
"Then, we provide details of parameter tuning.",
"Table 2 : Performance of the best SGNS, GloVe, PU-Learning models, trained on the text8 corpus.",
"Results show that our proposed model is better than SGNS and GloVe.",
"Star indicates it is significantly better than the second best algorithm in the same column according to Wilcoxon signed-rank test.",
"( p < 0 . 05 )",
"Table 3 : The size of the test sets.",
"The data sets in English are the original test sets.",
"To evaluate other languages, we translate the data sets from English.",
"We consider two widely used tasks for evaluating word embeddings, the word similarity task and the word analogy task.",
"In the word similarity task, each question contains a word pairs and an annotated similarity score.",
"The goal is to predict the similarity score between two words based on the inner product between the corresponding word vectors.",
"The performance is then measured by the Spearmans rank correlation coefficient, which estimates the correlation between the model predictions and human annotations.",
"Following the settings in literature, the experiments are conducted on five data sets, WordSim353 (Finkelstein et al., 2001), WordSim Similarity (Zesch et al., 2008), WordSim Relatedness (Agirre et al., 2009), Mechanical Turk (Radinsky et al., 2011) and MEN (Bruni et al., 2012).",
"In the word analogy task, we aim at solving analogy puzzles like man is to woman as king is to ?, where the expected answer is queen.",
"We consider two approaches for generating answers to the puzzles, namely 3CosAdd and 3CosMul (see (Levy and Goldberg, 2014a) for details).",
"We evaluate the performances on Google analogy dataset (Mikolov et al., 2013a) which contains 8,860 semantic and 10,675 syntactic questions.",
"For the analogy task, only the answer that exactly matches the annotated answer is counted as correct.",
"As a result, the analogy task is more difficult than the similarity task because the evaluation metric is stricter and it requires algorithms to differentiate words with similar meaning and find the right answer.",
"To evaluate the performances of models in the low-resource setting, we train word embedding models on Dutch, Danish, Czech and, English data sets collected from Wikipedia.",
"The original Wikipedia corpora in Dutch, Danish, Czech and English contain 216 million, 47 million, 92 million, and 1.8 billion tokens, respectively.",
"To simulate the low-resource setting, we sub-sample the Wikipedia corpora and create a subset of 64 million tokens for Dutch and Czech and a subset of 32 million tokens for English.",
"We will demonstrate how the size of the corpus affects the performance of embedding models in the experiments.",
"To evaluate the performance of word embeddings in Czech, Danish, and Dutch, we translate the English similarity and analogy test sets to the other languages by using Google Cloud Translation API 6 .",
"However, an English word may be translated to multiple words in another language (e.g., compound nouns).",
"We discard questions containing such words (see Table 3 for details).",
"Because all approaches are compared on the same test set for each language, the comparisons are fair.",
"We compare the proposed approach with two baseline methods, GloVe and SGNS.",
"The imple-6 https://cloud.google.com/translate 1029 Dutch (nl) Similarity task Analogy task Word embedding WS353 Similarity Relatedness M. Turk MEN 3CosAdd 3CosMul GloVe 35.4 35.0 41.7 44.3 11 21.2 20.2 SGNS 51.9 52.9 53.5 49.8 15.4 22.1 23.6 PU-learning 53.7 53.4 55.1 46.7 16.4 23.5 24.7 Danish (da) Similarity task Analogy task Word embedding WS353 Similarity Relatedness M. Turk MEN 3CosAdd 3CosMul GloVe 25.7 18.4 40.3 49.0 16.4 25.8 24.3 SGNS 49.7 47.1 52.1 51.5 22.4 22.0 21.2 PU-learning 53.5 49.5 59.3 51.7 22.7 22.6 22.8 Czech (cs) Similarity task Analogy task Word embedding WS353 Similarity Relatedness M. Turk MEN 3CosAdd 3CosMul GloVe 34.3 23.2 48.9 36.5 16.2 8.9 8.6 SGNS 51.4 42.7 61.1 44.2 21.3 10.4 9.8 PU-learning 54.0 45.4 65.3 46.2 21.7 9.9 10.1 English (en) Similarity task Analogy task Word embedding WS353 Similarity Relatedness M. Turk MEN 3CosAdd 3CosMul GloVe 47.9 52.1 49.5 58.8 19.1 34.3 32.6 SGNS 65.7 67.1 66.5 62.8 26.1 31.2 27.4 PU-learning 67.0 66.7 69.6 59.4 22.4 39.2 38.8 Table 4 : Performance of SGNS, GloVe, and the proposed PU-Learning model in four different languages.",
"Results show that the proposed PU-Learning model outperforms SGNS and GloVe in most cases when the size of corpus is relatively small (around 50 million tokens).",
"Star indicates it is significant better than the second best algorithm in the same column according to Wilcoxon signed-rank test.",
"( p < 0 . 05 ).",
"mentations of Glove 7 and SGNS 8 and provided by the original authors, and we apply the default settings when appropriate.",
"The proposed PU-Learning framework is implemented based on Yu et al. (2017a).",
"With the implementation of efficient update rules, our model requires less than 500 seconds to perform one iteration over the entire text8 corpus, which consists of 17 million tokens 9 .",
"All the models are implemented in C++.",
"We follow Levy et al. (2015) 10 to set windows size as 15, minimal count as 5, and dimension of word vectors as 300 in the experiments.",
"Training word embedding models involves selecting several hyper-parameters.",
"However, as the word embeddings are usually evaluated in an unsupervised setting (i.e., the evaluation data sets are not seen during the training), the parameters should not be tuned on each dataset.",
"To conduct a fair comparison, we tune hyper-parameters on the text8 dataset.",
"For GloVe model, we tune the discount parameters x max and find that x max = 10 per-7 https://nlp.stanford.edu/projects/glove 8 https://code.google.com/archive/p/word2vec/ 9 http://mattmahoney.net/dc/text8.zip 10 https://bitbucket.org/omerlevy/hyperwords forms the best.",
"SGNS has a natural parameter k which denotes the number of negative samples.",
"Same as Levy et al. (2015), we found that setting k to 5 leads to the best performance.",
"For the PU-learning model, and are two important parameters that denote the unified weight of zero entries and the weight of regularization terms, respectively.",
"We tune in a range from 2 1 to 2 14 and in a range from 2 0 to 2 10 .",
"We analyze the sensitivity of the model to these hyper-parameters in the experimental result section.",
"The best performance of each model on the text8 dataset is shown in the Table 2.",
"It shows that PU-learning model outperforms two baseline models.",
"We compared the proposed PU-Learning framework with two popular word embedding models SGNS (Mikolov et al., 2013b) and Glove (Pen-nington et al., 2014) on English and three other languages.",
"The experimental results are reported in Table 4.",
"The results show that the proposed PU-Learning framework outperforms the two baseline approaches significantly in most datasets.",
"This re-1030 Figure 1 : Performance change as the corpus size growing",
"(a) on the Google word analogy task (on the left-hand side) and",
"(b) on the WS353 word similarity task (on the right-hand side).",
"We demonstrate the performance on four languages, Dutch, Danish, Czech and English datasets.",
"Results show that PU-Learning model consistently outperforms SGNS and GloVe when the size of corpus is small.",
"Figure 2 : Impact of and in the PU-Learning framework.",
"sults confirm that the unobserved word pairs carry important information and the PU-Learning model leverages such information and achieves better performance.",
"To better understand the model, we conduct detailed analysis as follows.",
"Performance v.s. Corpus size We investigate the performance of our algorithm with respect to different corpus size, and plot the results in Figure 1.",
"The results in analogy task are obtained by 3CosMul method (Levy and Goldberg, 2014a).",
"As the corpus size grows, the performance of all models improves, and the PU-learning model consistently outperforms other methods in all the tasks.",
"However, with the size of the corpus increases, the difference becomes smaller.",
"This is reasonable as when the corpus size increases the number of nonzero terms becomes smaller and the PU-learning approach is resemblance to Glove.",
"Impacts of and We investigate how sensitive the model is to the hyper-parameters, and .",
"Figure 2 shows the performance along with various values of and when training on the text8 corpus, respectively.",
"Note that the x-axis is in log scale.",
"When is fixed, a big degrades the performance of the model significantly.",
"This is because when is too big the model suffers from under-fitting.",
"The model is less sensitive when is small and in general, = 2 11 achieves consistently good performance.",
"When is fixed, we observe that large (e.g., 2 4 ) leads to better performance.",
"As represents the weight assigned to the unobserved term, this result confirms that the model benefits from using the zero terms in the co-occurrences matrix.",
"In this paper, we presented a PU-Learning framework for learning word embeddings of low-resource languages.",
"We evaluated the proposed approach on English and other three languages and showed that the proposed approach outperforms other baselines by effectively leveraging the information from unobserved word pairs.",
"In the future, we would like to conduct experiments on other languages where available text corpora are relatively hard to obtain.",
"We are also interested in applying the proposed approach to domains, such as legal documents and clinical notes, where the amount of accessible data is small.",
"Besides, we plan to study how to leverage other information to facilitate the training of word embeddings under the low-resource setting.",
"This work was supported in part by National Science Foundation Grant IIS-1760523, IIS-1719097 and an NVIDIA Hardware Grant."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"objective",
"objective",
"result",
"objective",
"objective",
"abstain",
"result",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"result",
"objective",
"method",
"other"
] |
[
"We propose a context-dependent model to map utterances within an interaction to executable formal queries.",
"To incorporate interaction history, the model maintains an interaction-level encoder that updates after each turn, and can copy sub-sequences of previously predicted queries during generation.",
"Our approach combines implicit and explicit modeling of references between utterances.",
"We evaluate our model on the ATIS flight planning interactions, and demonstrate the benefits of modeling context and explicit references.",
"The meaning of conversational utterances depends strongly on the history of the interaction.",
"Consider a user querying a flight database using natural language (Figure 1).",
"Given a user utterance, the system must generate a query, execute it, and display results to the user, who then provides the next request.",
"Key to correctly mapping utterances to executable queries is resolving references.",
"For example, the second utterance implicitly depends on the first, and the reference ones in the third utterance explicitly refers to the response to the second utterance.",
"Within an interactive system, this information needs to be composed with mentions of database entries (e.g., Seattle , next Monday ) to generate a formal executable representation.",
"In this paper, we propose encoder-decoder models that directly map user utterances to executable queries, while considering the history of the interaction, including both previous utterances and their generated queries.",
"Reasoning about how the meaning of an utterance depends on the history of the interaction is critical to correctly respond to user requests.",
"As interactions progress, users may omit previously-mentioned constraints and entities, and an increas-show me flights from seattle to boston next monday [Table with 31 flights] on american airlines [Table with 5 flights] which ones arrive at 7pm [No flights returned] show me delta flights [Table with 5 flights] . . . Figure 1: An excerpt of an interaction from the ATIS flight planning system (Hemphill et al., 1990; Dahl et al., 1994).",
"ing portion of the utterance meaning must be derived from the interaction history.",
"Figure 2 shows SQL queries for the utterances in Figure",
"1. As the interaction progresses, the majority of the generated query is derived from the interaction history (underlined), rather than from the current utterance.",
"A key challenge is resolving what past information is incorporated and how.",
"For example, in the figure, the second utterance depends on the set of flights defined by the first, while adding a new constraint.",
"The third utterance further refines this set by adding a constraint to the constraints from both previous utterances.",
"In contrast, the fourth utterance refers only to the first one, and skips the two utterances in between.",
"1 Correctly generating the fourth query requires understanding that the time constraint ( at 7pm ) can be ignored as it follows an airline constraint that has been replaced.",
"We study complementary methods to enable this type of reasoning.",
"The first set of methods implicitly reason about references by modifying the encoder-decoder architecture to encode information from previous utterances for generation decisions.",
"We experiment with attending over previous utterances and using an interaction-level recurrent encoder.",
"We also study explicitly maintaining a set of referents using segments from pre-1 An alternative explanation is that utterance four refers to utterance three, and deletes the time and airline constraints.",
"vious queries.",
"At each step, the decoder chooses whether to output a token or select a segment from the set, which is appended to the output in a single decoding step.",
"In addition to enabling references to previously mentioned entities, sets, and constraints, this method also reduces the number of generation steps required, illustrated by the underlined segments in Figure",
"2. For example, the query y 2 will require 17 steps instead of 94 .",
"We evaluate our approach using the ATIS (Hemphill et al., 1990; Dahl et al., 1994) task, where a user interacts with a SQL flight database using natural language requests, and almost all queries require joins across multiple tables.",
"In addition to reasoning about contextual phenomena, we design our system to effectively resolve database values, including resolution of time expressions (e.g., next monday in Figure 1) using an existing semantic parser.",
"Our evaluation shows that reasoning about the history of the interaction is necessary, relatively increasing performance by 28 .",
"6% over a baseline with no access to this information, and that combining the implicit and explicit methods provides the best performance.",
"Furthermore, our analysis shows that our full approach maintains its performance as interaction length increases, while the performance of systems without explicit modeling deteriorates.",
"Our code is available at https://github.com/clic-lab/atis .",
"Our goal is to map utterances in interactions to formal executable queries.",
"We evaluate our approach with the ATIS corpus (Hemphill et al., 1990; Dahl et al., 1994), where users query a realistic flight planning system using natural language.",
"The system responds by displaying tables and database entries.",
"User utterances are mapped to SQL to query a complex database with 27 tables and 162 K entries.",
"96 .",
"6% of the queries require joins of different tables.",
"Section 7 describes ATIS.",
"Task Notation Let I be the set of all interactions, X the set of all utterances, and Y the set of all formal queries.",
"A user utterance x X of length | x | is a sequence h x 1 , . . . , x | x | i , where each x i is a natural language token.",
"A formal query y Y of length | y | is a sequence h y 1 , . . . , y | y | i , where each y i is a formal query token.",
"An interaction I I is a sequence of n utterance-query pairs h ( x 1 , y 1 ) , . . . , ( x n , y n ) i representing an interaction with n turns.",
"To refer to indexed interactions and their content, we mark I ( l ) as an interaction with index l , the i -th utterance and query in I ( l ) as x ( l ) i and y ( l ) i , and the j -th tokens in x ( l ) i and y ( l ) i as x ( l ) i,j and y ( l ) i,j .",
"At turn i , we denote the interaction history of length i 1 as I [: i 1] = h ( x 1 , y 1 ) , . . . , ( x i 1 , y i 1 ) i .",
"Given I [: i 1] and utterance x i our goal is to generate y i , while considering both x i and I [: i 1] .",
"Following the ex-2239 ecution of y i , the interaction history at turn i + 1 becomes I [: i ] = h ( x 1 , y 1 ) , . . . , ( x i , y i ) i .",
"Model Our model is based on the recurrent neural network (RNN; Elman, 1990) encoder-decoder framework with attention (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015).",
"We modify the model in three ways to reason about context from the interaction history by attending over previous utterances (Section 4.2), adding a turn-level recurrent encoder that updates after each turn (Section 4.3), and adding a mechanism to copy segments of queries from previous utterances (Section 4.4).",
"We also design a scoring function to score values that are abstracted during pre-processing, including entities and times (Section 6).",
"The full model selects between generating query tokens and copying complete segments from previous queries.",
"Learning We assume access to a training set that contains N interactions { I ( l ) } Nl =1 .",
"We train using a token-level cross-entropy objective (Sec-tion 5).",
"For models that use the turn-level encoder, we construct computational graphs for the entire interaction and back-propagate the loss for all queries together.",
"Without the turn-level encoder, each utterance is processed separately.",
"Evaluation We evaluate using a test set { I ( l ) } Ml =1 of M interactions.",
"We measure the accuracy of each utterance for each test interaction against the annotated query and its execution result.",
"For models that copy segments from previous queries, we evaluate using both predicted and gold previous queries.",
"Mapping sentences to formal representations, commonly known as semantic parsing, has been studied extensively with linguistically-motivated compositional representations, including variable-free logic (e.g., Zelle and Mooney, 1996; Clarke et al., 2010), lambda calculus (e.g., Zettlemoyer and Collins, 2005; Artzi and Zettlemoyer, 2011; Kushman and Barzilay, 2013), and dependency-based compositional semantics (e.g., Liang et al., 2011; Berant et al., 2013).",
"Recovering lambda-calculus representations was also studied with ATIS with focus on context-independent meaning using grammar-based approaches (Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2011; Wang et al., 2014) and neural networks (Dong and Lapata, 2016; Jia and Liang, 2016).",
"Recovering context-independent executable representations has been receiving increasing attention.",
"Mapping sentence in isolation to SQL queries has been studied with ATIS using statistical parsing (Popescu et al., 2004; Poon, 2013) and sequence-to-sequence models (Iyer et al., 2017).",
"Generating executable programs was studied with other domains and formal languages (Giordani and Moschitti, 2012; Ling et al., 2016; Zhong et al., 2017; Xu et al., 2017).",
"Recently, various approaches were proposed to use the formal language syntax to constrain the search space (Yin and Neubig, 2017; Rabinovich et al., 2017; Krishnamurthy et al., 2017; Cheng et al., 2017) making all outputs valid programs.",
"These contributions are orthogonal to ours, and can be directly integrated into our decoder.",
"Generating context-dependent formal representations has received less attention.",
"Miller et al. (1996) used ATIS and mapped utterances to semantic frames, which were then mapped to SQL queries.",
"For learning, they required full supervision, including annotated parse trees and contextual dependencies.",
"2 Zettlemoyer and Collins (2009) addressed the problem with lambda calculus, using a semantic parser trained separately with context-independent data.",
"In contrast, we generate executable formal queries and require only interaction query annotations for training.",
"Recovering context-dependent meaning was also studied with the SCONE (Long et al., 2016) and SequentialQA (Iyyer et al., 2017) corpora.",
"We compare ATIS to these corpora in Section 7.",
"Resolving explicit references, a part of our problem, has been studied as co-reference resolution (Ng, 2010).",
"Context-dependent language understanding was also studied for dialogue systems, including with ATIS, as surveyed by Tur et al. (2010).",
"More recently, encoder-decoder methods were applied to dialogue systems (Peng et al., 2017; Li et al., 2017), including using hierarchical RNNs (Serban et al., 2016, 2017), an architecture related to our turn-level encoder.",
"These approaches use slot-filling frames with limited expressivity, while we focus on the original representation of unconstrained SQL queries.",
"2 Miller et al. (1996) provide limited details about their evaluation.",
"Later work notes that they evaluate SQL query correctness (Zettlemoyer and Collins, 2009) with an accuracy of 78 .",
"4% , higher than our results.",
"However, the lack of details (e.g., if the metric is strict or relaxed) makes comparison difficult.",
"In addition, we use significantly less supervision, and re-split the data to avoid scenario bias (Section 7).",
"We base our model on an encoder-decoder architecture with attention (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015).",
"At each interaction turn i , given the current utterance x i and the interaction history I [: i 1] , the model generates the formal query y i .",
"Figure 3 illustrates our architecture.",
"We describe the base architecture, and gradually add components.",
"Our base architecture uses an encoder to process the user utterance x i = h x i, 1 , . . . , x i, | x i | i and a decoder to generate the output query y i token-by-token.",
"This architecture does not observe the interaction history I [: i 1] .",
"The encoder computes a hidden state h Ej = [ h Ej ; h Ej ] for each token x i,j using a bi-directional RNN.",
"The forward RNN is defined by: 3 h Ej = LSTM E (cid:16) x ( x i,j ); h Ej 1 (cid:17) , (1) where LSTM E is a long short-term memory recurrence (LSTM; Hochreiter and Schmidhuber, 1997) and x is a learned embedding function for input tokens.",
"The backward RNN recurs in the opposite direction with separate parameters.",
"We generate the query with an RNN decoder.",
"The decoder state at step k is: h Dk = LSTMD (cid:16) [ y ( y i,k 1 ); c k 1 ] ; h Dk 1 (cid:17) , where LSTMD is a two-layer LSTM recurrence, y is a learned embedding function for query tokens, and c k is an attention vector computed from the encoder states.",
"The dis-2241 show me flights from seattle to boston which ones arrive at 7pm (SELECT DISTINCT flight.flight_id ... ); SELECT ( DISTINCT flight.flight_id FROM flight flight.airline_code = 'AA' flight.from_airport IN (SELECT airport_service.airport_code ... city.city_code FROM city WHERE city.city_name = 'SEATTLE')) x 1 : x 2 : y 1 : Discourse State on american airlines (SELECT DISTINCT flight.flight_id ... ); y 2 : x 3 : Word Embeddings Turn-Level Encoder Encoder State Segments from Previous Queries Segment Encoder Segments SQL Tokens Attention Scores Attention State Output Distribution } (SELECT DISTINCT flight.flight_id FROM flight ...",
"y i, 0 is a special start token, and c 0 is a zero-vector.",
"The initial hidden state and cell memory of each layer are initialized as h E | x i | and c E | x i | .",
"The attention vector c k is a weighted sum of the encoder hidden states: s k ( j ) = h Ej WA h Dk (2) k = softmax ( s k ) (3) c k = | x i | X j =1 h Ej k ( j ) , (4) where WA is a learned matrix.",
"The probabilities of output query tokens are computed as: m k = tanh (cid:16) [ h Dk ; c k ] W m (cid:17) (5) P ( y i,k = w | x i , y i, 1: k 1 ) exp( m k W ow + b ow ) (6) where W m , W o , and b o are learned.",
"3 We omit the memory cell (often denoted as c j ) from all LSTM descriptions.",
"We use only the LSTM hidden state h j in other parts of the architecture unless explicitly noted.",
"We provide the model with the most recent interaction history by concatenating the previous h utterances h x i h , ..., x i 1 i with the current utterance in order, adding a special delimiter token between each utterance.",
"The concatenated input provides the model access to previous utterances, but not to previously generated queries, or utterances that are more than h turns in the past.",
"The architecture remains the same, except that the encoder and attention are computed over the concatenated sequence of tokens.",
"The probability of an output query token is computed the same, but is now conditioned on the interaction history: P ( y i,k = w | x i , y i, 1: k 1 , I [: i 1]) (7) exp( m k W ow + b ow ) .",
"Concatenating recent utterances to provide access to recent history has computational drawbacks.",
"The encoding of the utterance depends on its location in the concatenated string.",
"This requires encoding all recent history for each new utterance, and does not allow re-use of computation between utterances during encoding.",
"It also introduces a tradeoff between computation cost and expressivity: attending over the h previous utterances allows the decoder access to the information in these utterances when generating a query, but is computationally more expensive as h increases.",
"We address this by encoding each utterance once.",
"To account for the influence of the interaction history on utterance encoding, we maintain a discourse state encoding h Ii computed with a turn-level recurrence, and use it during utterance encoding.",
"The state is maintained and updated over the entire interaction.",
"At turn i , this model has access to the complete prefix of the interaction I [: i 1] and the current request x i .",
"In contrast, the concatenation-based encoder (Section 4.2) has access only to information from the previous h utterances.",
"We also use positional encoding in the attention computation to account for the position of each utterance relative to the current utterance.",
"Formally, we modify Equation 1 to encode x i : h Ei,j = LSTM E (cid:16)h x ( x i,j ); h Ii 1 i ; h Ei,j 1 (cid:17) , where h Ii 1 is the discourse state following utterance x i 1 .",
"LSTM E is modified analogously.",
"In contrast to the concatenation-based model, the recurrence processes a single utterance.",
"(cid:16) (cid:17)",
"Similar to the concatenation-based model, we attend over the current utterance and the h previous utterances.",
"We add relative position embeddings I to each hidden state.",
"These embeddings are learned for each possible distance 0 , . . . , h 1 from the current utterance.",
"We modify Equation 2 to index over both utterances and tokens: s k ( t, j ) = h h Et,j ; I ( i t ) i WA h Dk .",
"(8) In contrast to the concatenation model, without position embeddings, the attention computation has no indication of the utterance position, as our ablation shows in Section 8.",
"The attention distribution is computed as in Equation 3, and normalized across all utterances.",
"The position embedding is also used to compute the context vector c k : c k = i X t = i h | x t | X j =1 h h Et,j ; I ( i t ) i k ( t, j ) .",
"The discourse state and attention over previous utterances allow the model to consider the interaction history when generating queries.",
"However, we observe that context-dependent reasoning often requires generating sequences that were generated in previous turns.",
"Figure 2 shows how segments (underlined) extracted from previous utterances are predominant in later queries.",
"To take advantage of what was previously generated, we add copying of complete segments from previous queries by expanding the set of outputs at each generation step.",
"This mechanism explicitly models references, reduces the number of steps required to generate queries, and provides an interpretable view of what parts of a query originate in context.",
"Figure 3 illustrates this architecture.",
"Extracting Segments Given the interaction history I [: i 1] , we construct the set of segments S i 1 by deterministically extracting subtrees from previously generated queries.",
"4 In our data, we extract 13 5 .",
"9 ( ) segments for each annotated query.",
"Each segment s S i 1 is a tuple h a, b, l, r i , where a and b are the indices of the first and most recent queries, y a and y b , in the interaction that contain the segment.",
"l and r are the start and end indices of the segment in y b .",
"Encoding Segments We represent a segment s = h a, b, l, r i using the hidden states of an RNN encoding of the query y b .",
"The hidden states h h Q 1 , ..., h Q | y b | i are computed using a bi-directional LSTM RNN similar to the utterance encoder (Equation 1), except using separate LSTM parameters and y to embed the query tokens.",
"The embedded representation of a segment is a concatenation of the hidden states at the segment endpoints and an embedding of the relative position of the utterance where it appears first: h S = h h Ql ; h Qr ; g (min( g, i a )) i , where g is a learned embedding function of the position of the initial query y a relative to the current turn index i .",
"We learn an embedding for each relative position that is smaller than g , and use the same embedding for all other positions.",
"Generation with Segments At each generation step, the decoder selects between a single query token or a segment.",
"When a segment is selected, it 4 The process of extracting sub-trees is described in the supplementary material.",
"is appended to the generated query, an embedded segment representation for the next step is computed, and generation continues.",
"The probability of a segment s = h a, b, l, r i at decoding step k is: P ( y i,k = s | x i , y i, 1: k 1 , I [: i 1]) (9) exp (cid:16) m k WS h S (cid:17) , where m k is computed in Equation 5 and WS is a learned matrix.",
"To simplify the notation, we assign the segment to a single output token.",
"The output probabilities (Equations 7 and 9) are normalized together to a single probability distribution.",
"When a segment is selected, the embedding used as input for the next generation step is a bag-of-words encoding of the segment.",
"We extend the output token function y to take segments: y ( s = h a, b, l, r i ) = 1 r l r X k = l y ( y b,k ) .",
"Given an utterance x i and the history of interaction I [: i 1] , we generate the query y i .",
"An interaction starts with the user providing the first utterance x 1 .",
"The utterance is encoded using the initial discourse state h I 0 , the discourse state h I 1 is computed, the query y 1 is generated, and the set of segments S 1 is created.",
"The initial discourse state h I 0 is learned, and the set of segments S 0 used when generating y 1 is the empty set.",
"The attention is computed only over the first utterance because no previous utterances exist.",
"The user then provides the next utterance or concludes the interaction.",
"At turn i , the utterance x i is encoded using the discourse state h Ii 1 , the discourse state h Ii is computed, and the query y i is generated using the set of segments S i 1 .",
"The model has no access to future utterances.",
"We use greedy inference for generation.",
"Figure 3 illustrates a single decoding step.",
"We assume access to a training set of N interactions { I ( l ) } Nl =1 .",
"Given an interaction I ( l ) , each utterance x ( l ) i where 1 i | I ( l ) | , is paired with an annotated query y ( l ) i .",
"The set of segments from previous utterances is deterministically extracted from the annotated queries during learning.",
"However, the data does not indicate what parts of each query originate in segments copied from previous utterances.",
"We adopt a simple approach and heuristically identify context-dependent segments based on entities that appear in the utterance and the query.",
"5 Once we identify a segment in the annotated query, we replace it with a unique placeholder token, and it appears to the learning algorithm as a single generation decision.",
"Treating this decision as latent is an important direction for future work.",
"Given the segment copy decisions, we minimize the token cross-entropy loss: L ( y ( l ) i,k ) = log P (cid:16) y ( l ) i,k | x ( l ) i , y ( l ) i, 1: k 1 , I ( l ) [: i 1] (cid:17) , where k is the index of the output token.",
"The base and recent-history encoders (Sections 4.1 and 4.2) can be trained by processing each utterance separately.",
"For these models, given a mini-batch B of utterances, each identified by an interaction-utterance index pair, the loss is the mean token loss L = 1 P ( i,j ) B | y ( j ) i | X ( i,j ) B | y ( j ) i | X k =1 L ( y ( j ) i,k ) .",
"The turn-level encoder (Section 4.3) requires building a computation graph for the entire interaction.",
"We update the model parameters for each interaction.",
"The interaction loss is L = n B 1 P ni =1 | y ( j ) i | n X i =1 | y ( j ) i | X k =1 L ( y ( j ) i,k ) , where B is the batch size, and nB re-normalizes the loss so the gradient magnitude is not dependent on the number of utterances in the interaction.",
"Our ablations ( batch re-weight in Table 2) shows the importance of this term.",
"For both cases, we use teacher forcing (Williams and Zipser, 1989).",
"An important practical consideration for generation in ATIS and other database domains is reasoning about database values, such as entities, times, and dates.",
"For example, the first utterance in Figure 2 includes two entities and a date reference.",
"With limited data, learning to both reason about a large number of entities and to resolve dates are challenging for neural network models.",
"Following previous work (Dong and Lapata, 2016; Iyer et al., 2017), we address this with anonymization, where the data is preand post-processed to abstract over tokens that can be heuristically resolved to tokens in the query language.",
"In contrast to previous work, we design a special scoring function to anonymized tokens to reflect how they are used in the input utterances.",
"Figure 4 illustrates preprocessing in ATIS.",
"For example, we use a temporal semantic parser to resolve dates (e.g., next 5 The alignment is detailed in the supplementary material.",
"Monday ) and replace them with day, month, and year placeholders.",
"To anonymize database entries, we use a dictionary compiled from the database (e.g., to map Seattle to SEATTLE ).",
"The full details of the anonymization procedure are provided in the supplementary material.",
"Following preprocessing, the model reasons about encoding and generation of anonymized tokens (e.g., CITY#1 ) in addition to regular output tokens and query segments from the interaction history.",
"Anonymized tokens are typed (e.g., CITY ), map to a token in the query language (e.g., 'BOSTON' ), and appear both in input utterances and generated queries.",
"We modify our encoder and decoder embedding functions ( x and y ) to map anonymized tokens to the embeddings of their types (e.g., CITY ).",
"The type embeddings in x and y are separate.",
"Using the types only, while ignoring the indices, avoids learning biases that arise from the arbitrary ordering of the tokens in the training data.",
"However, it does not allow distinguishing between entries with the same type for generation decisions; for example, the common case where multiple cities are mentioned in an interaction.",
"We address this by scoring anonymized token based on the magnitude of attention assigned to them at generation step k .",
"The attention magnitude is computed from the encoder hidden states.",
"This computation considers both the decoder state and the location of the anonymized tokens in the input utterances to account for how they are used in the interaction.",
"The probability of an anonymized token w at generation step k is P ( y i,k = w | x i , y i, 1: k 1 , I [: i 1]) i X t = i h | x t | X j =1 (exp ( s k ( t, j ))) where s k ( t, j ) is the attention score computed in Equation 8.",
"This probability is normalized to-Mean/max utterances per interaction 7 .",
"0 / 64 Mean/max tokens per utterance 10 .",
"2 / 47 Mean/max token per SQL query 102 .",
"9 / 1286 Input vocabulary size 1582 Output vocabulary size 982 Table 1: ATIS data statistics.",
"gether with the probabilities in Equations 7 and 9 to form the complete output probability.",
"Hyperparameters, architecture details, and other experimental choices are detailed in the supplementary material.",
"Data We use ATIS (Hemphill et al., 1990; Dahl et al., 1994) to evaluate our approach.",
"The data was originally collected using wizard-of-oz experiments, and annotated with SQL queries.",
"Each interaction was based on a scenario given to a user.",
"We observed that the original data split shares scenarios between the train, development, and test splits.",
"This introduces biases, where travel patterns that appeared during training repeat in testing.",
"For example, a model trained on the original data split often correctly resolves the exact referenced by on Saturday with no pre-processing or access to the document date.",
"We evaluate this overfitting empirically in the supplementary material.",
"We re-split the data to avoid this bias.",
"We evenly distribute scenarios across splits so that each split contains both scenarios with many and few representative interactions.",
"The new split follows the original split sizes with 1148 / 380 / 130 train/dev/test interactions.",
"Table 1 shows data statistics.",
"The system uses a SQL database of 27 tables and 162 K entries.",
"96 .",
"6% of the queries require at least one join, and 93% at least two joins.",
"The most related work on ATIS to ours is Miller et al. (1996), which we discuss in Section 3.",
"The most related corpora to ATIS are SCONE (Long et al., 2016) and SequentialQA (Iyyer et al., 2017).",
"SCONE (Long et al., 2016) contains micro-domains consisting of stackor list-like elements.",
"The formal representation is linguistically-motivated and the majority of queries include a single binary predicate.",
"All interactions include five turns.",
"SequentialQA (Iyyer et al., 2017) contains sequences of questions on a single Wikipedia table.",
"Interactions are on average 2 .",
"9 turns long, and were created by re-phrasing a question from a context-independent corpus (Pasupat and Liang, 2015).",
"In contrast, ATIS uses a significantly larger 2244 database, requires generating complex queries with multiple joins, includes longer interactions, and was collected through interaction with users.",
"The supplementary material contains analysis of the contextual phenomena observed in ATIS.",
"Pre-processing We pre-process the data to identify and anonymize entities (e.g., cities), numbers, times, and dates.",
"We use string matching heuristics to identify entities and numbers, and identify and resolve times and dates using UWTime (Lee et al., 2014).",
"When resolving dates we use the original interaction date as the document time.",
"The supplementary material details this process.",
"Metrics We evaluate using query accuracy, strict denotation accuracy, and relaxed denotation accuracy.",
"Query accuracy is the percentage of predicted queries that match the reference query.",
"Strict denotation accuracy is the percentage of predicted queries that execute to exactly the same table as the reference query.",
"In contrast to strict, relaxed gives credit to a prediction query that fails to execute if the reference table is empty.",
"In cases when the utterance is ambiguous and there are multiple gold queries, we consider the query or table correct if they match any of the gold labels.",
"Systems We evaluate four systems:",
"(a) SEQ 2 SEQ -0: the baseline encoder-decoder model (Section 4.1);",
"(b) SEQ 2 SEQ-H : encoder-decoder with attention on current and previous utterances (Section 4.2);",
"(c) S 2 S + ANON : encoder-decoder with attention on previous utterances and anonymization scoring (Section 6); and",
"(d) FULL : the complete approach including segment copying (Section 4.4).",
"For FULL , we evaluate with predicted and gold (FULL-GOLD ) previous queries, and without attention on previous utterances (FULL -0).",
"All models except SEQ 2 SEQ -0 and FULL -0 use h = 3 previous utterances.",
"We limit segment copying to segments that appear in the most recent query only.",
"6 Unless specifically ablated, all experiments use pre-processing.",
"Table 2 shows development and test results.",
"We run each experiment five times and report mean and standard deviation.",
"The main metric we focus on is strict denotation accuracy.",
"The relatively low performance of SEQ 2 SEQ -0 demon-6 While we only use segments from the most recent query, they often appear for the first time much earlier in the interaction, which influences their absolute position value a .",
"strates the need for context in this task.",
"Attending on recent history significantly increases performance.",
"Both SEQ 2 SEQ models score anonymized tokens as regular vocabulary tokens.",
"Adding anonymized token scoring further increases performance ( S 2 S + ANON ).",
"FULL -0 and FULL add segment copying and the turn-level encoder.",
"The relatively high performance of FULL -0 shows that substituting segment copying with attention maintains and even improves the system effectiveness.",
"However, the best performance is provided with FULL , which combines both.",
"This shows the ben-efit of redundancy in accessing contextual information.",
"Unlike the other systems, both FULL and FULL -0 suffer from cascading errors due to selecting query segments from previously incorrect predictions.",
"The higher FULL-GOLD performance illustrates the influence of error propagation.",
"While part of this error can be mitigated by having both attention and segment copying, this behavior is unlikely to be learned from supervised learning, where errors are never observed.",
"Ablations show that all components contribute to the system performance.",
"Performance drops when using a concatenation-based encoder instead of the turn-level encoder ( turn-level enc.; Section 4.3).",
"Using batch-reweighting ( batch-reweight; Section 5) and input position embeddings ( input pos. embs.; Section 4.3) are critical to the performance of the turn-level encoder.",
"Removing copying of query segments 2245 0 5 10 15 20 30 45 60 75",
"from the interaction history lowers performance ( query segments; Section 4.4).",
"Treating indexed anonymized tokens as regular tokens, rather than using attention-based scoring and type embeddings, lowers performance ( anon. scoring; Section 6).",
"Finally, pre-processing, which includes anonymization, is critical ( pre-processing).",
"Figure",
"5(a) shows the performance as interactions progress.",
"All systems show a drop in performance after the first utterance, which is always context-independent.",
"As expected, SEQ 2 SEQ -0 shows the biggest drop.",
"The FULL approach is the most stable as the interaction progresses.",
"Figure",
"5(b) shows the performance as we decrease the number of previous utterances used for attention h .",
"Without the turn-level encoder and segment copying ( SEQ 2 SEQ-H and S 2 S + ANON ), performance decreases significantly as h decreases.",
"In contrast, the FULL model shows a smaller decrease ( 1 . 5% ).",
"The supplementary material includes attention analysis demonstrating the importance of previous-utterance attention.",
"However, attending on fewer utterances improves inference speed: FULL -0 is 30% faster than FULL .",
"Finally, while we re-split the data due to scenario sharing between train and test early in development and used this split only for development, we also evaluate on the original split (Table 3).",
"We report mean and standard deviation over three trials.",
"The high performance of S 2 S + ANON potentially indicates it benefits more from the differences between the splitting procedures.",
"We analyze errors made by the full model on thirty development interactions.",
"When analyzing the output of FULL , we focus on error propagation and analyze predictions that resulted in an incorrect table when using FULL , but a correct table when using FULL-GOLD .",
"56 .",
"7% are due to selection of a segment that contained an incorrect constraint.",
"43 .",
"4% of the errors are caused by a necessary segment missing during generation.",
"93 .",
"0% of all predictions are valid SQL and follow the database schema.",
"We also analyze the errors of FULL-GOLD .",
"We observe that 30 .",
"0% of errors are due to generating constraints that were not mentioned by the user.",
"Other common errors include generating relevant constraints with incorrect values ( 23 . 3% ) and missing constraints ( 23 . 3% ).",
"We also evaluate our model's ability to recover long-distance references while constraints are added, changed, or removed, and when target attributes change.",
"The supplementary material includes the analysis details.",
"In general, the model resolves references well.",
"However, it fails to recover constraints mentioned in the past following a user focus state change (Grosz and Sidner, 1986).",
"We study models that recover context-dependent executable representations from user utterances by reasoning about interaction history.",
"We observe that our segment-copying models suffer from error propagation when extracting segments from previously-generated queries.",
"This could be mitigated by training a model to ignore erroneous segments, and recover by relying on attention for generation.",
"However, because supervised learning does not expose the model to erroneous states, a different learning approach is required.",
"Our analysis demonstrates that our model is relatively insensitive to interaction length, and is able to recover both explicit and implicit references to previously-mentioned entities and constraints.",
"Further study of user focus change is required, an important phenomenon that is relatively rare in ATIS.",
"This research was supported by the NSF (CRII-1656998), Schmidt Sciences, a gift from Google, and cloud computing credits from Amazon.",
"We thank Valts Blukis, Luke Zettlemoyer, and the anonymous reviewers for their helpful comments."
] | [
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"result",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"method",
"other",
"method",
"method",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Abstract Multimodal machine translation and textual chat translation have received considerable attention in recent years.",
"Although the conversation in its natural form is usually multimodal, there still lacks work on multimodal machine translation in conversations.",
"In this work, we introduce a new task named M ultimodal C hat T ranslation (MCT), aiming to generate more accurate translations with the help of the associated dialogue history and visual context.",
"To this end, we firstly construct a M ultimodal S entiment C hat T ranslation D ataset (MSCTD) containing 142,871 English-Chinese utterance pairs in 14,762 bilingual dialogues and 30,370 English-German utterance pairs in 3,079 bilingual dialogues.",
"Each utterance pair, corresponding to the visual context that reflects the current conversational scene, is annotated with a sentiment label.",
"Then, we benchmark the task by establishing multiple baseline systems that incorporate multimodal and sentiment features for MCT.",
"Preliminary experiments on four language directions (English Chinese and English German) verify the potential of contextual and multimodal information fusion and the positive impact of sentiment on the MCT task.",
"Additionally, as a by-product of the MSCTD, it also provides two new benchmarks on multimodal dialogue sentiment analysis.",
"Our work can facilitate research on both multimodal chat translation and multimodal dialogue sentiment analysis.",
"1 1 Introduction Multimodal machine translation (Huang et al., 2016; Calixto and Liu, 2017) and textual chat translation (Wang et al., 2016; Farajian et al., 2020; Liang et al., 2021a) mainly focus on investigating Equal contribution.",
"the potential visual features and dialogue context, respectively.",
"Both of them have received much attention.",
"Although plenty of studies on them have been carried out based on either image captions (Calixto et al., 2017, 2019; Ive et al., 2019; Yin et al., 2020; Yao and Wan, 2020) or textual dialogues (Wang et al., 2017; Maruf et al., 2018; Liang et al., 2021c), to our knowledge, little research work has been devoted to multimodal machine translation in conversations.",
"One important reason is the lack of multimodal bilingual conversational datasets.",
"Generally, conversation in its natural form is multimodal (Poria et al., 2019; Liang et al., 2021b).",
"When humans converse, what a speaker would say next depends largely on what he/she sees.",
"That is, the visual information plays a key role in ( i ) supplementing some crucial scene information ( e.g. , the specific locations or objects, or facial expressions), ( ii ) resolving ambiguous multi-sense words ( e.g. , bank), and ( iii ) addressing pronominal anaphora issues ( e.g. , it/this).",
"For instance, as shown in Fig. 1",
"(a), the image obviously points out the current location on the sea, which may help disambiguate the meaning of course in the utterance X 5 .",
"Specifically, the dialogue history ( i.e. , talking about mar-itime affairs) and the corresponding visual context ( i.e. , on the sea/boat) assist us to determine that the word course means route/direction instead of curriculum.",
"In Fig. 1",
"(b), the visual context indicates object information, i.e. , the defibrillator in X 1 , which may help with translation.",
"In Fig. 1",
"(c), the image of the utterance X 1 also demonstrates that it can provide appropriate candidates ( i.e., the jeans) when translating the pronoun these.",
"Besides, the image offers some clues to judge the sentiment when it is hard to judge the polarity based only on the utterance ( e.g. , Y 2 in Fig. 1",
"(b) and X 3 in Fig. 1",
"(c)).",
"All of the above call for a real-life multimodal bilingual conversational data resource that can encourage further research in chat transla-2601 Figure 1: Three examples of the annotated multimodal bilingual dialogue in our MSCTD and the conversation is going from left to right.",
"In this work, we propose a new task named M ultimodal C hat T ranslation (MCT), with the goal to produce more accurate translations by taking the dialogue history and visual context into consideration.",
"To this end, we firstly construct a M ultimodal S entiment C hat T ranslation D ataset (MSCTD).",
"The MSCTD includes over 17k multimodal bilingual conversations (more than 142k English-Chinese and 30k English-German utterance pairs), where each utterance pair corresponds with the associated visual context indicating where it happens.",
"In addition, each utterance is annotated with one sentiment label ( i.e. , posi-tive/neutral/negative).",
"Based on the constructed MSCTD, we benchmark the MCT task by establishing multiple Transformer-based (Vaswani et al., 2017) systems adapted from several advanced representative multimodal machine translation models (Ive et al., 2019; Yao and Wan, 2020) and textual chat translation models (Ma et al., 2020; Liang et al., 2021c).",
"Specifically, we incorporate multimodal features and sentiment features into these models for a suitable translation under the current conversational scene.",
"Extensive experiments on four language directions (English Chinese and English German) in terms of BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and TER (Snover et al., 2006), demonstrate the effectiveness of contextual and multimodal information fusion, and the positive impact of sentiment on MCT.",
"Furthermore, experiments on the multimodal dialogue sentiment analysis task of the three languages show the added value of the proposed MSCTD.",
"In summary, our main contributions are: We propose a new task: multimodal chat translation named MCT, to advance multimodal chat translation research.",
"We are the first that contributes the human-annotated multimodal sentiment chat translation dataset (MSCTD), which contains 17,841 multimodal bilingual conversations, totally 173,240 <English utterance, Chinese/German utterance, image, sentiment> quadruplets.",
"We implement multiple Transformer-based baselines and provide benchmarks for the new 2602 task.",
"We also conduct comprehensive analysis and ablation study to offer more insights.",
"As a by-product of our MSCTD, it also facilitates the development of multimodal dialogue sentiment analysis.",
"In this section, we firstly clarify the symbol definition, and then define the proposed Multimodal Chat Translation task and the existing Multimodal Dialogue Sentiment Analysis task.",
"In a multimodal bilingual conversation ( e.g. , Fig. 1",
"(a)), we assume the two speakers have alternatively given utterances in different languages for u turns, resulting in X 1 , X 2 , X 3 , X 4 , ..., X u and Y 1 , Y 2 , Y 3 , Y 4 , ..., Y u on the source and target sides, respectively, along with the corresponding visual context representing where it happens: Z 1 , Z 2 , Z 3 , Z 4 , ..., Z u .",
"Among these utterances, X 1 , X 3 , X 5 , ..., X u are originally spoken by the first speaker and Y 1 , Y 3 , Y 5 , ..., Y u are the corresponding translations in the target language.",
"Similarly, Y 2 , Y 4 , Y 6 , ..., Y u 1 are originally spoken by the second speaker and X 2 , X 4 , X 6 , ..., X u 1 are the translated utterances in the source language.",
"According to languages and modalities, we define three types of context: (1) the dialogue history context of X u on the source side as CX u = { X 1 , X 2 , X 3 , ..., X u 1 } , and (2) that of Y u on the target side as CY u = { Y 1 , Y 2 , Y 3 , ..., Y u 1 } , and (3) the visual dialogue context CZ u = { Z 1 , Z 2 , Z 3 , ..., Z u 1 , Z u } .",
"2 Multimodal Chat Translation.",
"When translating the u -th utterance X u = { x u, 1 , x u, 2 , ..., x u,N } , the goal of the MCT task is to generate Y u = { y u, 1 , y u, 2 , ..., y u,T } with the guidance of bilingual dialogue history contexts CX u and CY u and the associated visual context CZ u .",
"Formally, the probability distribution of the target utterance Y u is defined as follows: P ( Y u | X u , C u ) = T (cid:89) t =1 p ( y u,t | y u,<t , X u , C u ) , (1) where y u,<t = { y u, 1 , y u, 2 , y u, 3 , ..., y u,t 1 } and C u = {C X u , CY u , CZ u } .",
"Multimodal Dialogue Sentiment Analysis.",
"Taking the u -th utterance X u for example, the task aims to predict a sentiment label { Positive , Neutral , Negative } for it given the corresponding image Z u and the dialogue history CX u .",
"In this section, we mainly introduce our MSCTD in five aspects: Data Source 3.1, Annotation Procedure 3.2, Annotation Quality Assessment 3.3, Dataset Statistics 3.4, and the introduction of Related Datasets 3.5.",
"We mainly select the multimodal dialogues from the public available OpenViDial dataset (Meng et al., 2021), where each monolingual (English) utterance corresponds to an image.",
"Since the original English utterance in OpenViDial is automatically extracted from the corresponding movie image by optical character recognition (OCR) 3 , it contains a lot of noises or errors.",
"Furthermore, the lack of associated translations and sentiment labels for utterances, makes it impossible for directly conducting research on multimodal chat translation, sentiment-aware machine translation, and multimodal dialogue sentiment analysis with this data.",
"Therefore, we further correct the wrong English utterances and annotate the corresponding Chinese/German translations and sentiment labels.",
"includes two steps: automatic annotation and then human annotation according to the annotation rules.",
"Automatic Annotation.",
"To improve the annotation efficiency, we firstly construct a paired English-Chinese subtitle database 4 .",
"Then, we utilize the original English utterance to automatically select its Chinese translation by perfectly matching the English subtitle in the constructed bilingual database.",
"As a result, about 78.57% original English utterances are paired with Chinese translations.",
"4 To build this database, we firstly crawl two consecutive English and Chinese movie subtitles (not aligned) from here https://www.kexiaoguo.com/ .",
"Then, we use several advanced technologies ( e.g. , Vecalign (Thompson and Koehn, 2019) and LASER (Schwenk, 2018)) to align these subtitles.",
"Finally, we obtain the large-scale bilingual dialogue dataset (28M).",
"We will also release this dataset, together with the MSCTD, to facilitate subsequent research.",
"Human Annotation.",
"Since the full data are large, we divide the data into three parts and employ three annotators who are Chinese postgraduate students highly proficient in English comprehension.",
"Each annotator is responsible for annotating one part according to the following guidelines: Check and correct each English utterance; Check and correct the matched Chinese subtitle to suit the current conversational scene; For the remaining 21.43% (without Chinese subtitles), translate them according to the corrected English utterance, the corresponding image, and the dialogue history.",
"Additionally, we employ another three annotators to label sentiment polarity for each utterance independently ( i.e. , each one annotates the full data) according to the current utterance, the associated image and the dialogue history.",
"Following Firdaus et al. (2020), majority voting scheme is used for selecting the final sentiment label for each utterance.",
"Finally, having the conversations in both languages allows us to simulate bilingual conversations where one speaker speaks in English and the other responds in Chinese (Farajian et al., 2020; Liang et al., 2021a).",
"Fig. 1 shows three bilingual conversations where the two speakers have alternatively given utterances, along with their corresponding translations.",
"By doing so, we build the MSCTD 5 .",
"5 For English German, we firstly sample a small set of training data and apply the same test and validation set with the English Chinese version.",
"Then, the German translations are collected from professional English-German workers con-3.3 Annotation Quality Assessment To evaluate the quality of annotation, we use Fleiss' Kappa to measure the overall annotation consistency among three annotators (Fleiss and Cohen, 1973).",
"We measure this data from two aspects: translation quality and sentiment quality.",
"For translation quality, we measure the inter-annotator agreement on a subset of data (sample 50 dialogues with 504 utterances), and we ask the three annotators mentioned above to re-annotate this subset independently.",
"Then, we invite another postgraduate student to measure the inter-annotator agreement on the re-annotated subset by the three annotators.",
"Finally, the inter-annotator agreement calculated by Fleiss' kappa are 0.921 for English Chinese and 0.957 for English German, respectively.",
"They indicate Almost Perfect Agreement between three annotators.",
"For sentiment quality, we measure the inter-annotator agreement on the full dataset.",
"The inter-annotator agreements calculated by Fleiss' kappa is 0.695, which indicates Substantial Agreement between three annotators.",
"The level is consistent with previous work (Firdaus et al., 2020) which can be considered as reliable.",
"As shown in Tab.",
"1, the MSCTD contains totally 17,841 bilingual conversations and 142,871/30,370 tracted via a language service company (magicdatatech).",
"The three crwodworkers are asked to translate them according to the English utterance, the corresponding image, and the dialogue history.",
"English-Chinese/English-German utterance pairs with two modalities ( i.e. , text and image), where each utterance has been annotated with onesenti-ment label.",
"For English-Chinese/English-German, we split the dialogues into 13,749/2,066 for train, 504/504 for valid, and 509/509 for test while keeping roughly the same distribution of the utterance pair/image, respectively.",
"The detailed annotation of sentiment labels are also listed in Tab.",
"1, where three labels account for similar proportion.",
"Based on the statistics in Tab.",
"1, the average number of turns per dialogue is about 10, and the average numbers of tokens per turn are 8.2, 10.9, and 8.3 for English utterances (word level), Chinese utterances (character level), and German utterance (word level), respectively.",
"The related datasets mainly involve three research fields: multimodal machine translation, textual chat translation, and multimodal dialogue sentiment analysis.",
"In multimodal machine translation , there exists one dataset: Multi30K (Elliott et al., 2016), where each image is paired with one English caption and two human translations into German and French.",
"It is an extension of the original English description dataset: Flickr30K (Young et al., 2014).",
"Afterwards, some small-scale multimodal test sets (about 3k instances) are released to evaluate the system, such as WMT18 test set (1,071 instances) (Barrault et al., 2018).",
"In textual chat translation , three datasets have been released: BSD-AMI-ON (Rikters et al., 2020), BconTrasT (Farajian et al., 2020), and BMELD (Liang et al., 2021a).",
"The BSD-AMI-ON is a document-aligned Japanese-English conversation corpus, which contains three sub-corpora: Business Scene Dialogue (BSD (Rikters et al., 2019)), Japanese translation of AMI meeting corpus (AMI (McCowan et al., 2005)), and Japanese translation of OntoNotes 5.0 (ON (Marcus et al.)).",
"The BconTrast and BMELD are two human-annotated datasets, which are extended from monolingual textual dialogue datasets Taskmaster-1 (Byrne et al., 2019) and MELD (Poria et al., 2019), respectively.",
"In multimodal dialogue sentiment analysis , the MELD (Poria et al., 2019) and MEISD (Fir-daus et al., 2020) datasets are publicly available.",
"The MELD dataset is constructed by extending the EmotionLines (Hsu et al., 2018) from the scripts of the popular sitcom Friends .",
"It is similar to MEISD, which is also built from famous English TV shows under different genres ( e.g. , Friends , Grey's Anatomy , The Big Bang Theory ).",
"The resources mentioned above are extensively used in corresponding fields of research and they even cover some sub-tasks in MSCTD.",
"However, our MSCTD is different from them in terms of both complexity and quantity.",
"Firstly, multimodal machine translation datasets and textual chat translation datasets are either in multimodal or textual dialogue, while ours includes both.",
"It is obvious that conducting multimodal machine translation in conversations is more challenging due to the more complex scene.",
"Furthermore, MSCTD covers four language directions and contains more than 17k human-annotated utterances-image triplets, which is more than the sum of the annotated ones in Multi30K, BSD-AMI-ON, BconTrasT, and BMELD.",
"Tab.",
"2 provides information on the number of available modality, dialogues, and their constituent utterances for all the five datasets.",
"What is more, our MSCTD is also annotated with sentiment labels while they are not.",
"Secondly, compared with two existing multimodal dialogue sentiment analysis datasets, MSCTD's quantity of English version is nearly ten-times of the annotated utterances in MEISD or MELD.",
"More importantly, our MSCTD provides an equivalent Chinese multimodal dialogue senti-2605 Dataset # Dialogues # Utterances Train Valid Test Train Valid Test MELD (Poria et al., 2019) 1,039 114 280 9,989 1,109 2,610 MEISD (Firdaus et al., 2020) 702 93 205 14,040 1,860 4,100 MSCTD-Zh (Chinese version) 13,749 504 509 132,741 5,063 5,067 MSCTD-En (English version) 13,749 504 509 132,741 5,063 5,067 MSCTD-De (German version) 2,066 504 509 20,240 5,063 5,067 Table 3: Comparisons of four multimodal dialogue sentiment analysis datasets: MELD, MEISD, and our MSCTD on two languages.",
"ment analysis dataset and a relatively small German counterpart.",
"Tab.",
"3 shows the comparison for all the five datasets, i.e. , MELD, MEISD, and our MSCTD on three languages.",
"Following previous work (Wang et al., 2018; Ive et al., 2019; Meng et al., 2021), we focus on two types of image representation, namely the coarse-grained spatial visual feature maps and the fine-grained object-based visual features.",
"Coarse-grained Spatial Visual (CSV) Features.",
"We use the ResNet-50 model (He et al., 2016) pre-trained on ImageNet (Deng et al., 2009) to extract a high-dimensional feature vector f j R d c for image Z j .",
"These features contain output activations for various filters while preserving spatial information.",
"We refer to models that use such features as CSV .",
"Fine-grained Object-based Visual (FOV) Features.",
"Since using coarse-grained image features may be insufficient to model fine-grained visual elements in images including the specific locations, objects, and facial expressions, we use a bag-of-objects representation where the objects are obtained using an off-shelf Faster R-CNNs (Ren et al., 2015) pre-trained on Visual Genome (Krishna et al., 2017).",
"Specifically, for an input image Z j , we obtain a set of detected objects from Faster R-CNNs, i.e. , O j = { o j, 1 , o j, 2 , o j, 3 , ..., o j,m } , where m is the number of extracted objects and o j, R d f .",
"Each object is captured by a dense feature representation, which can be mapped back to a bounding box / region ( i.e. , Region-of-Interest (ROI)).",
"We refer to models that use such features as FOV .",
"Both types of features have been used in various vision and language tasks such as multimodal dialogue sentiment analysis (Firdaus et al., 2020), image captioning (Xu et al., 2015; Shi et al., 2021), and multimodal machine translation (Ive et al., 2019; Lin et al., 2020; Su et al., 2021).",
"To provide convincing benchmarks for the MSCTD, we perform experiments with multiple Transformer-based (Vaswani et al., 2017) models for the multimodal chat translation task.",
"Additionally, we provide several baselines for the multimodal dialogue sentiment analysis task.",
"According to different visual features, we divide the baselines into three categories: text only ( T ), text plus coarse visual features ( T + CSV ), and text plus fine-grained visual features ( T + FOV ).",
"T : Trans.",
"(Vaswani et al., 2017): the standard transformer model, which is a sentence-level neural machine translation (NMT) model (Yan et al., 2020; Meng and Zhang, 2019; Zhang et al., 2019), i.e. , regardless of the dialogue history.",
"TCT (Ma et al., 2020): A unified document-level NMT model based on Transformer by sharing the first encoder layer to incorporate the dialogue history, which is used as the Textual Chat Translation (TCT) model by (Liang et al., 2021c).",
"CA-TCT (Liang et al., 2021c): A multi-task learning model that uses several auxiliary tasks to help model generate coherence-aware translations.",
"T+CSV :",
"Trans.+Emb (Vaswani et al., 2017): it concatenates the image feature to the word embedding and then trains the sentence-level NMT model.",
"Trans.+Sum (Ive et al., 2019): it adds the projected image feature to each position of the encoder output.",
"Trans.+Att (Ive et al., 2019): this model utilizes an additional cross-attention sublayer to attend the image features in each decoder block.",
"MCT: we implement the multimodal self-attention (Yao and Wan, 2020) in the encoder to incorporate the image features into the chat translation model.",
"CA-MCT: similarly, we incorporate image features into the multitask-based chat translation model (Liang et al., 2021c) by the multimodal self-attention.",
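"As a concrete but simplified, single-layer sketch of this kind of fusion, textual queries can attend over the concatenation of text states and projected image features, so each word representation becomes image-aware; the dimensions below are assumptions, not the paper's configuration.",
```python
# Hedged sketch of multimodal self-attention for injecting image features.
import torch
import torch.nn as nn

class MultimodalSelfAttention(nn.Module):
    def __init__(self, d_model=512, d_image=2048, n_heads=8):
        super().__init__()
        self.img_proj = nn.Linear(d_image, d_model)  # map CSV/FOV features
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, text_states, image_feats):
        # text_states: (B, T, d_model); image_feats: (B, V, d_image)
        memory = torch.cat([text_states, self.img_proj(image_feats)], dim=1)
        out, _ = self.attn(query=text_states, key=memory, value=memory)
        return out  # (B, T, d_model) image-aware text representations

layer = MultimodalSelfAttention()
out = layer(torch.rand(2, 10, 512), torch.rand(2, 36, 2048))
```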
"T+FOV :",
"Trans.+Con (Vaswani et al., 2017): concatenates the word sequence with the extracted object sequence, and the resulting sequence is taken as input to the sentence-level NMT model.",
"Trans.+Obj (Ive et al., 2019): it is a translate-and-refine model (two-stage decoder) where the images are only used by a second-pass decoder.",
"M-Trans. (Yao and Wan, 2020): leverages a multimodal self-attention layer to encode multimodal information, where the hidden representations of images are induced from the text under the guidance of image-aware attention.",
"[Table 4: BLEU, METEOR, and TER of models M1-M13 (T, T+CSV, T+FOV) for Chinese->English, English->Chinese, German->English, and English->German.]",
"MCT: here, we incorporate the object-level features into the model instead of the coarse ones.",
"CA-MCT: similarly, we incorporate the object-level features into the multi-task learning model.",
"For multimodal dialogue sentiment analysis, we perform experiments with the following models.",
"text-CNN (Kim, 2014): it only applies CNNs to extract textual information for each utterance in a dialogue.",
"In this approach, we do not use the dialogue history or the additional visual information.",
"DialogueRNN (Majumder et al., 2019): this baseline is a powerful approach for capturing dialogue history with effective mechanisms for sentiment analysis.",
"DialogueRNN + BERT (Firdaus et al., 2020): this model improves the performance of DialogueRNN by using BERT (Devlin et al., 2019) embeddings instead of GloVe (Pennington et al., 2014) embeddings to represent the textual features.",
"DialogueRNN + PLM: we propose a stronger baseline built upon the DialogueRNN for sentiment analysis.",
"Specifically, we utilize RoBERTa (Liu et al., 2019) embeddings for English, ERNIE (Sun et al., 2019) embeddings for Chinese, and XLM-R (Conneau et al., 2020) embeddings for German sentiment analysis.",
"For multimodal chat translation, we utilize the standard Transformer-Base architecture (Vaswani et al., 2017).",
"Generally, we use the settings described in previous work (Ive et al., 2019; Yao and Wan, 2020; Liang et al., 2021c) to conduct experiments on our MSCTD.",
"For multimodal dialogue sentiment analysis, we mainly follow the settings of previous work (Poria et al., 2019; Firdaus et al., 2020).",
"Please refer to Appendix A for more details.",
"For multimodal chat translation, following previous work (Liang et al., 2021c; Ive et al., 2019), we use SacreBLEU (Post, 2018), METEOR (Denkowski and Lavie, 2014), and TER (Snover et al., 2006), with the statistical significance test (Koehn, 2004), for fair comparison.",
"Specifically, for Chinese->English, we report case-insensitive scores.",
"For English->Chinese, the reported score is at the character level.",
"For English->German, we report case-sensitive BLEU scores.",
"For multimodal dialogue sentiment analysis, following Poria et al. (2019), we report weighted-average F-score.",
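"For reference, the reported metrics can be computed with standard tooling roughly as below (the paper's exact evaluation scripts may differ); the toy inputs are placeholders.",
```python
# Hedged sketch: SacreBLEU for translation, weighted F1 for sentiment.
import sacrebleu
from sklearn.metrics import f1_score

hyps = ["the cat sat on the mat"]
refs = [["the cat sat on the mat"]]  # one reference stream, parallel to hyps
bleu = sacrebleu.corpus_bleu(hyps, refs, lowercase=True)  # case-insensitive
print(bleu.score)

y_true = [0, 1, 2, 1]  # gold sentiment labels
y_pred = [0, 1, 1, 1]  # predicted labels
print(f1_score(y_true, y_pred, average="weighted"))
```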
"Results on Chinese<->English.",
"(1) Among the text-only models (M1-M3), we find that M1 performs worse than M2, showing that the dialogue history is indeed beneficial for better translations.",
"[Footnote 6: SacreBLEU signature BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a+version.1.4.13.]",
"Table 5: Sentiment-aware translation results using ground-truth sentiment labels (Chinese->English). Transformer (T): BLEU 20.43, METEOR 24.06, TER 61.00; TCT (T): 20.81, 24.45, 61.19; CA-TCT (T): 21.23, 24.82, 60.75; MCT (T+CSV): 22.25, 25.60, 59.69; CA-MCT (T+CSV): 22.68, 25.60, 59.14.",
"Furthermore, M3 can further improve the translation performance, which suggests that modeling the coherence characteristic in conversations is crucial for higher results.",
"These findings can also be observed in other settings (e.g., M7 vs. M4-M6; M8 vs. M7).",
"(2) The models with image features incorporated achieve higher results than the corresponding text-based models (i.e., M4-M6 & M9 vs. M1; M7 & M12 vs. M2; M8 & M13 vs. M3).",
"(3) The dialogue history and the image features yield significant cumulative benefits (M8 vs. M1 and M13 vs. M1). (4) Among the image-based models (M4-M8 or M9-M13), we observe that different fusion manners of text and image features lead to substantially different results.",
"It shows that there is much room for further improvement using other more advanced fusion methods.",
"(5) Using FOV image features is generally better than using the coarse CSV counterpart (M9-M13 vs. M4-M8), which demonstrates that the fine-grained object elements may offer more specific and effective information for better translations.",
"Results on English<->German.",
"Similar findings are observed for English<->German.",
"This suggests that our conclusions are consistent across datasets and language pairs.",
"All these results prove the value of our constructed MSCTD.",
"Furthermore, we provide some stronger baselines for which we first train the model on a general-domain corpus and then fine-tune it on our chat translation dataset.",
"The results are presented in Tab. 8 of Appendix B and show findings similar to those observed in Tab. 4.",
"To evaluate the effect of sentiment, we conduct experiments on several baselines, including single-modality and double-modality ones.",
"[Table 6: Chinese->English results in BLEU, METEOR, TER, and ACC.]",
"In terms of implementation, following Si et al. (2019), we append the sentiment label to the head of the source utterance.",
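"For instance, the sentiment-tagged source can be built as simply as the following sketch (the tag format is an assumption):",
```python
# Sketch: prepend the sentiment label to the source utterance.
def add_sentiment_tag(utterance: str, sentiment: str) -> str:
    return f"<{sentiment}> {utterance}"

print(add_sentiment_tag("I can't believe you did that.", "negative"))
# -> "<negative> I can't believe you did that."
```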
"Tab. 5 shows the results.",
"Comparing them with the results (M1-M3 and M7-M8) obtained without using sentiment in Tab. 4, we find that using the ground-truth sentiment label has a positive impact on the translation performance.",
"Therefore, we believe that it is a topic worthy of research in the future.",
"Following a reviewer's suggestion, we also conducted experiments with automatically predicted sentiment labels rather than the gold ones, where we used a mixed sentiment representation obtained by dot-multiplying the predicted sentiment distribution with the sentiment label representations.",
"The results are shown in Tab.",
"6, where we find that the sentiment factor, as the inherent property of conversations, indeed has a positive impact on translation performance.",
"We also observe that using the automatically predicted sentiment labels (in fact, the mixed sentiment representation) yields slightly lower results than using the ground truth on all three metrics.",
"The reason may be that the mixed sentiment representation provides only a certain degree of fault tolerance to prediction errors, which keeps the drop small.",
"Results on MSCTD-Zh.",
"We can see that the text-based models perform much worse than the other multimodal systems, which shows that evaluating sentiment based only on the text is insufficient.",
"It indicates that visual information and contextual embeddings are crucial for classifying sentiment polarities.",
"Overall, we achieve a weighted F1 score of 67.57% with the DialogueRNN+ERNIE model on Chinese.",
"Results on MSCTD-En and MSCTD-De.",
"These show that it is beneficial to introduce the visual information and contextual embeddings into the multimodal dialogue sentiment analysis task for different languages.",
"Overall, we achieve the best F1 scores of 66.45% and 54.46% on English and German, respectively.",
"On this task, we obtain results consistent with previous work (Poria et al., 2019; Firdaus et al., 2020), which suggests the utility and reliability of our MSCTD.",
"Additionally, MSCTD-Zh and MSCTD-De bridge the gap on multimodal dialogue sentiment analysis of Chinese and German.",
"In this paper, we introduce a new multimodal machine translation task in conversations.",
"Then, we construct a multimodal sentiment chat translation dataset named MSCTD.",
"Finally, we establish multiple baseline systems and demonstrate the importance of dialogue history and multimodal information for the MCT task.",
"Additionally, we conduct the multimodal dialogue sentiment analysis task in three languages on the MSCTD to show its added value.",
"MCT is a challenging task due to the complex scenes in the MSCTD, leaving much room for further improvement.",
"This work mainly focuses on introducing the new task and dataset, and we provide multiple models to benchmark the task.",
"In the future, the following issues may be worth exploring to promote the performance of MCT: How to effectively perceive and understand the visual scenes to better assist multimodal machine translation in conversations?",
"How to build a multimodal conversation representation model to effectively align, interact, and fuse the information of two modalities?",
"In this section, we discuss the main ethical considerations of MSCTD: (1) Intellectual property protection.",
"The English utterances and images of MSCTD are from the OpenViDial dataset (Meng et al., 2021).",
"For our translations and sentiment annotations, permission is granted to copy, distribute, and modify the contents under the terms of the Creative Commons Attribution-ShareAlike 3.0 Unported License and the Creative Commons CC0 License, respectively.",
"(2) Privacy.",
"The data sources are publicly available movies.",
"The collection and the Chinese/German annotation procedures are designed for chat translation purposes and do not involve privacy issues.",
"(3) Compensation.",
"During sentiment annotation and Chinese/German translation, the payment for annotating each utterance is determined by the average annotation time and local labor compensation standards.",
"(4) Data characteristics.",
"We refer readers to the dataset content and to Meng et al. (2021) for more detailed characteristics.",
"(5) Potential problems.",
"While principled measures are taken to ensure the quality of the dataset, there might still be potential problems with the dataset quality, which may lead to incorrect translations in applications.",
"However, moderate noise is common in large-scale modern translators, even for human translated sentences, which should not cause serious issues.",
"This work is supported by the National Key R&D Program of China (2020AAA0108001) and the National Natural Science Foundation of China (No. 61976015, 61976016, 61876198, and 61370130).",
"Liang is supported by 2021 Tencent Rhino-Bird Research Elite Training Program.",
"The authors would like to thank the anonymous reviewers for their valuable comments to improve this paper."
] | [
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"In this digital age, online users expect personalized content.",
"To cater to diverse groups of audiences across online platforms, it is necessary to generate multiple variants of the same content with differing degrees of characteristics (sentiment, style, formality, etc.).",
"Though text style transfer is a well-explored related area, it focuses on flipping the polarity of a style attribute instead of regulating a fine-grained attribute transfer.",
"In this paper, we propose a hierarchical architecture for finer control over an attribute while preserving content via attribute disentanglement.",
"We demonstrate the effectiveness of the generative process for two different attributes with varied complexity, namely sentiment and formality.",
"With extensive experiments and human evaluation on five real-world datasets, we show that the framework can generate natural-looking sentences with a finer degree of control over the intensity of a given attribute.",
"The ubiquity of online social networks and the world wide web has brought in diverse and often conflicting groups of users consuming similar information but from different perspectives.",
"So the onus falls on the content producer to provide customized content based on users' profiles.",
"Consider an example related to a Spanish football (soccer) league.",
"Say the news is 'Barcelona has defeated Real Madrid'.",
"This news needs to be presented in different tones: to a Barcelona fan, 'Barcelona smashed Real Madrid'; to a Real Madrid fan, 'Real Madrid lost the epic battle'; and to a (say) Villarreal fan, 'Barcelona wins three points against Real Madrid'.",
"Automatic generation of content with fine regulation of attributes like sentiment and style is extremely beneficial in this context.",
"There are several related works in the similar space of text style transfer techniques (Hu et al., 2017; Logeswaran et al., 2018; Shen et al., 2017; Singh and Palod, 2018), which attempt to switch the polarity of a text, e.g., from formal to casual, or from positive to negative sentiment.",
"However, none of these works focuses on the more involved problem of fine-grained regulation of attributes to generate multiple variants of a sentence.",
"Several of the existing style transfer methods (Fu et al., 2018; John et al., 2018) convert a continuous entangled generative representation space, obtained using a variational auto-encoder (Bowman et al., 2015), into disentangled attribute and content spaces.",
"This facilitates attribute polarity switching by perturbing the attribute representation without interfering with the context.",
"However, a disentangled generative representation may result in a loss of information about complex inter-dependency of content and attributes otherwise captured in an unmodified entangled generative space.",
"Hence, a trivial extension of the variational inference (encoding) mechanism for finer attribute control, by allowing incremental perturbation of the attribute representation in the disentangled generative space, often leads to the generation of 'not-so-natural' sentences mostly unrelated to the original content.",
"More specifically, there are two design challenges that need to be tackled to achieve fine-grained attribute control:",
"(a) smooth regulation of attributes via disentangled attribute space perturbation and",
"(b) natural sentence generation preserving the content.",
"This paper builds up a layered VAE to tackle these problems simultaneously.",
"Specifically, we propose the model Control Text VAE (CTVAE), which transforms a derived representation of an entangled and enriched text embedding (obtained using the BERT encoder) into a disentangled representation of attribute and context, using a transformation module followed by a factored prior to ensure independence between the context and attribute dimensions.",
"Further, using attribute supervision on the dimension designated for a given attribute, we establish a correlation between the continuous representation and the discrete attribute value, facilitating smooth interpolation as intended in (a).",
"It preserves both the disentangled and entangled representations at different levels of the inference module's hierarchy.",
"By designing the transformation network as reversible, it restores the original entangled sentence representation, which is our generative space, from the disentangled space to achieve (b).",
"We demonstrate the effectiveness of CTVAE at generating controlled text by fine-tuning two different attributes, namely sentiment and formality.",
"Using five publicly available datasets, we show that CTVAE improves performance significantly over previous controlled text generation models when performing content-preserving style transfer and fine-tuning of the target attribute.",
"With human evaluation of the generated sentences on three different metrics (meaning preservation, degree of target attribute transfer, and naturalness), we show that CTVAE can generate attribute-regulated, content-preserving natural sentences.",
"2 Related Work: Unlike style transfer, fine-grained attribute-regulated text generation is less explored yet extremely necessary.",
"State-of-the-art methods for style transfer are categorized as supervised and unsupervised techniques.",
"If parallel examples are available for any attribute, i.e., training data consisting of original and corresponding attribute flipped sentences, then supervised techniques (Bahdanau et al., 2014; Vaswani et al., 2017) could be used to perform style transfer.",
"The papers (Xu et al., 2012; Jhamtani et al., 2017; Rao and Tetreault, 2018) introduced parallel corpora consisting of formal and corresponding informal sentences, showed that coarse-grained formality transfer is possible, and benchmarked various neural frameworks for the same.",
"Generating a parallel training corpus for fine-grained attribute transfer is expensive and impractical, as for each sentence we would need to generate multiple style-transferred texts bearing fine-grained attribute levels.",
"Some recent works focus on semi-supervised approaches incorporating attribute information with non-parallel datasets.",
"These techniques mainly focus on disentangling the attribute and content representations in the latent space (Fu et al., 2018; John et al., 2018; Logeswaran et al., 2018; Shen et al., 2017; Singh and Palod, 2018) by using different encoding modules along with feature supervision.",
"[Footnote 1: code available at https://github.com/bidishasamantakgp/CTVAE]",
"A recent work (John et al., 2018) uses an adversarial setup in a multitasking setting to achieve an attribute representation independent of the content.",
"As this work disentangles context and attribute in multidimensional spaces, it limits interpolation of the attribute space to a desired degree.",
"Moreover, the disentangled generative space causes loss of important context information.",
"Similarly, the paper (Hu et al., 2017) uses attribute information as a structured or one-hot vector, which is not continuous, restricting interpolation.",
"They replace the attribute representation to a desired value (corresponding to opposite polarity) and generate sentences from this disentangled space.",
"However, a naive extension for fine-grained control by perturbing the attribute space by a small amount is difficult, as the representation is multidimensional; moreover, it leads to unnatural, poorly readable sentences.",
"From a different perspective, a recent work (He et al., 2020) proposed an unsupervised framework to achieve style transfer.",
"They propose a generative probabilistic model that assumes non-parallel corpus as partially observed parallel corpus.",
"They do not infer the posterior distribution of the observed data; hence, fine-grained attribute transfer is difficult.",
"As extensions of current style transfer methods are non-trivial, a recent work (Wang et al., 2019) has proposed fine-grained sentiment regulation keeping the content intact.",
"It gradually updates the entangled latent representation using costly fast-gradient-iterative modification until it can generate a sentence entailing the target attribute from it.",
"However, overemphasis on content preservation often results in the generation of the original unmodified sentence followed by new phrases bearing the target attribute.",
"This makes it difficult to extend such methods to more complex attributes like casual-to-formal transformation.",
"Understanding the criticality of fine-grained attribute transfer, we propose a new framework in this direction, which not only facilitates fine-grained control even for complex attributes, but also mitigates the existing problems of disentangled generative spaces.",
"We propose a hierarchical model using Variational Autoencoders (Kingma and Welling, 2013) to achieve fine-grained control over the attribute space while maintaining the quality of the generated sentences.",
"We consider an input set $X = \{x_0, \ldots, x_{M-1}\}$ of $M$ observed sentences sampled from some underlying unknown data distribution $p_D$.",
"Along with the sentences, we observe ground-truth attributes $F = \{f_0, \ldots, f_{M-1}\}$, where $f_i$ is associated with sentence $x_i$.",
"For ease of reference, we will henceforth denote a training instance $x_i$ and $f_i$ by $x$ and $f$, respectively.",
"A detailed architectural overview of CTVAE is depicted in Figure 1; it can be divided into two modules: a hierarchical encoder and a corresponding hierarchical decoder.",
"We start by describing the inference model (encoder), followed by the generation model (decoder).",
"The inference model is designed as a bottom-up hierarchical encoder with two distinct layers modelling the word sequence representation $z_s$ and the feature representation $z_f$.",
"We model an enriched sentence representation $z_s \in \mathbb{R}^d$ with latent dimension $d$ from the word sequence $x$ as follows.",
"We first obtain the contextual word embeddings for each word $w$ in $x$ from the pre-trained BERT model (Turc et al., 2019).",
"Then, we generate an aggregated encoding $E_s$ by averaging them.",
"Finally, we transform it into a continuous $d$-dimensional Gaussian space using a fully connected neural network $g$ in two steps: $g$ maps $E_s$ to the Gaussian parameters $[\mu_s, \sigma_s] = g(E_s)$, which define the posterior $q(z_s | x) = \mathcal{N}(\mu_s, \mathrm{diag}(\sigma_s^2))$.",
"The sentence representation $z_s$ is sampled from this posterior distribution $q(z_s | x)$.",
"It is an entangled complex manifold of different salient features present in multiple dimensions.",
"This enriched representation is the generative representation, as we decode sentences from $z_s$ for better quality.",
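"A minimal sketch of this lower inference layer (variable names and the latent size are our assumptions, not the paper's code) is as follows: BERT token embeddings are averaged into $E_s$, then mapped by $g$ to the mean and log-variance of a diagonal Gaussian from which $z_s$ is sampled via the reparameterization trick.",
```python
# Hedged sketch: BERT-based encoder producing the entangled posterior q(z_s|x).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

d = 32                                           # latent dimension (assumed)
g = nn.Linear(bert.config.hidden_size, 2 * d)    # fully connected network g

def encode(sentence: str) -> torch.Tensor:
    batch = tok(sentence, return_tensors="pt")
    hidden = bert(**batch).last_hidden_state     # contextual word embeddings
    e_s = hidden.mean(dim=1)                     # aggregated encoding E_s
    mu, logvar = g(e_s).chunk(2, dim=-1)         # Gaussian parameters
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # sampled z_s

z_s = encode("the food was great but the service was slow")
```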
"Next, we transform the sentence representation $z_s$ into another representation $z_f$, on which we impose disentanglement constraints followed by attribute supervision, such that $z_f$ can be decomposed into independent context and attribute spaces.",
"We need an efficient transformation that maintains the inherent dependencies between context and attribute during this process.",
"It is also important to restore the enriched $z_s$ from the decomposed $z_f$, i.e., to capture the reverse dependency.",
"Instead of modeling two different transformation networks to capture the dependency in both directions, we design a single reversible transformation module.",
"It guarantees that, given a $z_f$, we can get back an appropriate entangled $z_s$ useful for natural sentence generation.",
"Hence, we build our transformation network by extending R-NVP (Dinh et al., 2016), a reversible auto-regressive normalizing flow, to achieve the mentioned interdependency and inversion.",
"Specifically, we split $z_s$ into two parts.",
"The first $d-1$ dimensions of $z_s$ are dedicated to modelling latent factors important for the context.",
"The remaining (last) dimension is used to derive a representation of the specified attribute.",
"The detailed interconnection between them in one transformation step is depicted in Figure 1(B).",
"We obtain $z_f$ by $T$ transformation steps, where $T$ is a hyperparameter.",
"In a transformation step $t$, we obtain a representation distribution $q_t(z^t | z^{t-1})$, which is characterized as the following ordered set of operations:",
"$[\sigma_1^t, \mu_1^t] = \phi_1^t(z^{t-1}_{(1:d-1)})$ (3)",
"$z^t_{(d)} = z^{t-1}_{(d)} \cdot \sigma_1^t + \mu_1^t$ (4)",
"$[\sigma_2^t, \mu_2^t] = \phi_2^t(z^t_{(d)})$ (5)",
"$z^t_{(1:d-1)} = z^{t-1}_{(1:d-1)} \odot \sigma_2^t + \mu_2^t$ (6)",
"Eq. (4) intuitively describes that the attribute representation dimension depends on the first $d-1$ dimensions, i.e., the context.",
"Eq. (6) encodes how the context is influenced by the attribute.",
"Here, $\phi_1^t$ and $\phi_2^t$ are designed as multilayer fully connected feed-forward networks, which are not invertible.",
"However, a careful inspection of Eqs. (4) and (6) reveals that, given $z^t$, the input $z^{t-1}$ can be fully recovered.",
"We provide the reverse transformations in the next subsection.",
"Thus, we can get $q(z_f | z_s) := q(z^T | z_s)$, and we assign $z_f := z^T$.",
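"One coupling step of such a flow can be sketched as below; the network names follow our reconstruction of Eqs. (3)-(6) and (10)-(13), and exponentiated scales are used to keep the transformation invertible, which the paper may implement differently.",
```python
# Hedged sketch: one reversible coupling step between context and attribute.
import torch
import torch.nn as nn

class CouplingStep(nn.Module):
    def __init__(self, d: int, hidden: int = 64):
        super().__init__()
        self.phi1 = nn.Sequential(nn.Linear(d - 1, hidden), nn.ReLU(), nn.Linear(hidden, 2))
        self.phi2 = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 2 * (d - 1)))

    def forward(self, z):                          # z^{t-1} -> z^t
        ctx, attr = z[:, :-1], z[:, -1:]
        s1, m1 = self.phi1(ctx).chunk(2, dim=-1)
        attr = attr * s1.exp() + m1                # cf. Eq. (4)
        s2, m2 = self.phi2(attr).chunk(2, dim=-1)
        ctx = ctx * s2.exp() + m2                  # cf. Eq. (6)
        return torch.cat([ctx, attr], dim=-1)

    def inverse(self, z):                          # z^t -> z^{t-1}
        ctx, attr = z[:, :-1], z[:, -1:]
        s2, m2 = self.phi2(attr).chunk(2, dim=-1)
        ctx = (ctx - m2) * (-s2).exp()             # cf. Eq. (11)
        s1, m1 = self.phi1(ctx).chunk(2, dim=-1)
        attr = (attr - m1) * (-s1).exp()           # cf. Eq. (13)
        return torch.cat([ctx, attr], dim=-1)

step = CouplingStep(d=8)
z = torch.randn(4, 8)
assert torch.allclose(step.inverse(step(z)), z, atol=1e-5)
```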
"We pick the $d$-th (last) dimension of $z_f$ to model the specified attribute representation $z_a$.",
"To facilitate smooth interpolation in this attribute space, we keep $z_a$ unidimensional.",
"We further use attribute supervision to establish the correlation with categorical values of the attribute.",
"We will discuss the process in the next subsection.",
"The remaining dimensions of $z_f$ are kept for the other contextual features $z_u$.",
"We discuss the disentanglement of $z_f$ in Sec. 3.4.",
"The overall posterior distribution achieved by the hierarchical inference mechanism is: $q(z | x) = \underbrace{q(z_s | x)}_{\text{Entangled}} \, \underbrace{q(z_f | z_s)}_{\text{Disentangled}}$ (7)",
"3.3 Generative model: We design our generative model $p$ using a top-down hierarchy with the two variables $z_s$ and $z_f$.",
"The overall distribution of the latent variables for generation is defined as: $p(z) = \underbrace{p(z_f)}_{\text{Disentangled}} \, \underbrace{p(z_s | z_f)}_{\text{Entangled}}$ (8)",
"Here $p(z_f)$ is a factored prior over the feature representation $z_f$, which can be expressed as $p(z_f) = \prod_{i=1}^{d} p(z_f^i)$.",
"We use a standard normal distribution, which is a factored isotropic distribution, as the prior, i.e., $p(z_f) = \mathcal{N}(0, I)$.",
"Imposing this factored prior enforces disentanglement (Kim and Mnih, 2018) on the derived space q ( z f | z s ) .",
"As discussed in the previous section, we have designated the last dimension of $z_f$ to capture the attribute of interest, and the remaining dimensions for other contextual features.",
"Henceforth, the attribute representation prior can be sampled from $p(z_f^d)$, and the priors of the other contextual feature representations can be sampled from $\prod_{i=1}^{d-1} p(z_f^i)$.",
"We use feature supervision on $z_a$ to increase the correlation between the representation and the attribute value as follows.",
"Given $z_a$, we decode the categorical attribute value of the given sentence $x$ and back-propagate the prediction loss to modify the network parameters.",
"More specifically, the decoding distribution for the ground-truth attribute is $p(f | z_a) = \mathrm{Categorical}(\psi(z_a))$ (9), where $\psi$ is a scaling network that converts the scalar $z_a$ into a logit vector over the categorical values of the ground-truth attribute.",
"Next, the network tries to decode the entangled distribution $z_s$ from the disentangled distribution $z_f$.",
"We apply the reverse transformation flow to recover $z_s$ using $T$ inverse transformations.",
"Starting from $z_f$ (i.e., $z^T$), we recover $z_s$ by reverse transformation steps $p_t(z^{t-1} | z^t)$, as a set of ordered operations:",
"$[\sigma_2^t, \mu_2^t] = \phi_2^t(z^t_{(d)})$ (10)",
"$z^{t-1}_{(1:d-1)} = (z^t_{(1:d-1)} - \mu_2^t) / \sigma_2^t$ (11)",
"$[\sigma_1^t, \mu_1^t] = \phi_1^t(z^{t-1}_{(1:d-1)})$ (12)",
"$z^{t-1}_{(d)} = (z^t_{(d)} - \mu_1^t) / \sigma_1^t$ (13)",
"Eq. (11) is the reverse transformation corresponding to Eq. (6).",
"Similarly, Eq. (13) defines the reverse flow of Eq. (4).",
"It may be noted that $\sigma_1^t, \mu_1^t$ and $\sigma_2^t, \mu_2^t$ are derived from the same neural networks $\phi_1^t, \phi_2^t$ as in Eqs. (3) and (5).",
"Hence, given a $z^t$, we can easily get back $z^{t-1}$ without any loss of information.",
"Thus we get $z_s := z^0$.",
"Following density estimation theory (Dinh et al., 2016), the log probability density of $p(z_s | z_f)$, i.e., $\log p_T(z_s | z_f)$, is given by $\log p(z_f) + \sum_{t=1}^{T} \log \left| \det \frac{d f_t}{d f_{t-1}} \right|$ (14), where $f_t$ denotes the transformation function at step $t$ described in Eqs. (3)-(6).",
"Finally, with the decoded $z_s$, we sample the word sequence $x^{(j)}$ using a recurrent unit as follows: $x^{(j)} \sim \mathrm{Softmax}(m(h^{(j)}))$ (15), where $h^{(j)} = r(x^{(j-1)}, z_s)$ is the hidden state of the gated recurrent unit $r$, which takes the previously generated token $x^{(j-1)}$ and the sentence representation $z_s$.",
"We then pass this hidden state to the feed-forward network $m$ to generate logits.",
"Subsequently, we sample words based on the softmax distribution of the generated logits.",
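"A compact sketch of this sampling loop, with assumed sizes and a dummy start token, is given below.",
```python
# Hedged sketch: GRU-based word sampling conditioned on z_s (cf. Eq. (15)).
import torch
import torch.nn as nn

vocab, emb_dim, hid, d = 1000, 64, 128, 32
embed = nn.Embedding(vocab, emb_dim)
r = nn.GRUCell(emb_dim + d, hid)     # recurrent unit over token and z_s
m = nn.Linear(hid, vocab)            # feed-forward network producing logits

def sample(z_s, max_len=20, bos=1):
    h = torch.zeros(1, hid)
    token = torch.tensor([bos])
    out = []
    for _ in range(max_len):
        inp = torch.cat([embed(token), z_s], dim=-1)
        h = r(inp, h)                # h^{(j)} = r(x^{(j-1)}, z_s)
        token = torch.multinomial(torch.softmax(m(h), -1), 1).squeeze(1)
        out.append(token.item())
    return out

print(sample(torch.randn(1, d)))
```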
"The joint likelihood of the sentence, features, and latent variables is $p(x, f, z_s, z_f) := p(x | z_s)\, p(f | z_a)\, p(z_s | z_f)\, p(z_f)$ (16).",
"3.4 Training: We can learn the model parameters by optimizing the joint likelihood given in Eq. (16).",
"To learn the complex transformation of the disentangled attribute and context in $z_f$ from the entangled $z_s$ precisely, we need to first estimate the approximate posterior $q(z_s | x)$ accurately.",
"However, in the initial iterations of training, the encoder fails to approximate the posterior distribution (He et al., 2019).",
"Hence, we first train the lower layer by maximizing the ELBO (Kingma and Welling, 2013): $\mathbb{E}_{q(z_s | x)} \log p(x | z_s) - \mathrm{KL}(q(z_s | x) \,\|\, p(z_s | z_f))$ (17)",
"This is unsupervised training, as we are not using any attribute information, and this objective helps to update the encoder parameters to generate the entangled $z_s$.",
"Once the lower layer is trained, we update the transformation parameters (Eq. (14)) and impose feature supervision by maximizing the marginal likelihood of $z_f$ given below:",
"$\mathbb{E}_{q(z_f | z_s)} \big[ \lambda \log p(f | z_a) + \log p(z_f) + \sum_{t=1}^{T} \log \big| \det \frac{d f_t}{d f_{t-1}} \big| \big] - \beta\, \mathrm{KL}(q(z_f | z_s) \,\|\, p(z_f))$ (18)",
"where $\beta$ and $\lambda$ are regularizing parameters to enforce disentanglement of $z_f$ and to emphasize attribute supervision, respectively.",
"If we break down the KL term of the above objective as $\mathbb{E}_{z \sim q(z_s)} I(z_s, z_f) + \mathrm{KL}(q(z_f) \,\|\, p(z_f))$, we obtain the total correlation loss $\mathrm{KL}(q(z_f) \,\|\, p(z_f))$; minimizing it, the model achieves disentanglement of $z_f$ along its dimensions (Higgins et al., 2017).",
"Also, the mutual information $I(f, z_a)$ between the specified attribute and $z_a$ can be computed using the entropy function $H(\cdot)$ as $H(f) - H(f | z_a)$.",
"[Table 1: Attribute, Dataset, # sentences, Avg. (dataset statistics).]",
"This quantity, through $-H(f | z_a) \ge \mathbb{E}_{x \sim p_D}[\mathbb{E}_{q(z_s | x) q(z_a | z_s)} \log p(f | z_a)]$, is lower bounded by the likelihood $p(f | z_a)$; hence, we emphasize the likelihood term in the objective function using $\lambda$ to maintain a higher correlation between $z_a$ and $f$.",
"Thus, we update the network parameters phase by phase using Eqs. (17) and (18).",
"We broadly looked into two evaluation criteria to compare the performance of different generative models",
"(a) Attribute control: efficiency in generating sentences entailing target attribute of interest",
"(b) Fine-grained transfer: efficiency of content preserving fine-grained attribute regulated text generation.",
"In this section, we discuss the datasets and baselines, followed by the performance across datasets.",
"We focused on two attributes of varied complexity, namely,",
"(a) sentiment and",
"(b) formality.",
"In Table 1 we describe the datasets in detail.",
"For sentiment we include two review datasets and one hate-speech dataset.",
"The Gab dataset is designed for counter-hatespeech learning and every hateful sentence has a candidate counter hate-speech.",
"We consider them as non-hateful (NH) class of content.",
"Thus we have training examples with hateful (H) and non-hateful (NH) contents.",
"The formality datasets have formal (F) and corresponding casual (C) instances.",
"We report all the results on the test data provided.",
"We compare CTVAE's performance with the semi-supervised method (a) ctrlGen (Hu et al., 2017), the supervised method (b) DAE (John et al., 2018), which focuses on text style transfer using disentanglement, and the unsupervised method (c) ProbStyleTransfer (He et al., 2020).",
"We also compare with",
"(d) entangleGen (Wang et al., 2019) which focuses on fine-grained style transfer using entangled representation.",
"Apart from these state-of-the-art baselines, we inspect (e) CTVAE-NR (CTVAE with Non-Reversible transformation), where we replace the invertible transformations of CTVAE with two separate transformation networks responsible for capturing $q(z_f | z_s)$ and $p(z_s | z_f)$.",
"[Table 2: Controlled generation and style inversion results of all methods on the sentiment (Yelp, Amazon, GAB) and formality (Music, Family) datasets.]",
"Experimental setup: We estimate the average representation value of $z_a$ corresponding to each categorical (binary) value of the attribute of interest, denoted $z_{max}$ and $z_{min}$, from the training data.",
"We generate attribute controlled sentences in two ways.",
"First, we sample a generative representation vector from the prior distribution (i.e., $p(z_s | z_f)$ with $z_f \sim \mathcal{N}(0, I)$) and assign either $z_{max}$ or $z_{min}$ to $z_a$.",
"We sample 10 sentences from a representation and select the one which bears the target attribute.",
"If there is no such sample generated we consider it as a failure case.",
"Similarly, we assign $z_{max}$ or $z_{min}$ to $z_a$, depending on the target attribute, in the posterior representation of a given sentence $x$.",
"We sample 10 sentences from it and select the one that is most similar to $x$ (BERT embeddings having cosine similarity greater than $\tau = 0.71$) and entails the target attribute.",
"If we fail to find any candidate satisfying both criteria, we consider it a miss.",
"We identify the generated sentences bearing the target attribute using a classifier built by extending BERT and trained on the different datasets.",
"We investigate multiple cosine similarity thresholds (0.65 to 0.75 with granularity 0.01).",
"We observe that generated sentences with cosine similarity to the original sentence below 0.7 do not contain important context words.",
"On the contrary, we observe that all methods except CTVAE and entangleGen were able to generate only a very small number of candidates with high similarity scores (> 0.73).",
"To provide a fair comparison, we keep $\tau$ at 0.71 for all datasets across all methods.",
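"The candidate-selection rule can be sketched as follows, where the embeddings, classifier outputs, and function name are placeholders we introduce for illustration.",
```python
# Hedged sketch: pick the generated candidate most similar to the original
# sentence that also bears the target attribute, or report a miss.
import torch

def pick_candidate(orig_emb, cand_embs, cand_labels, target, tau=0.71):
    # orig_emb: (d,); cand_embs: (k, d); cand_labels: predicted attributes
    sims = torch.nn.functional.cosine_similarity(cand_embs, orig_emb.unsqueeze(0))
    for i in sims.argsort(descending=True).tolist():
        if sims[i] > tau and cand_labels[i] == target:
            return i
    return None  # a miss: no candidate satisfies both criteria

idx = pick_candidate(torch.randn(768), torch.randn(10, 768), ["pos"] * 10, "pos")
```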
"Metrics: We report controlled generation accuracy , i.e., percentage of generated sentences from prior bearing target attribute and style inversion accuracy , i.e., the percentage of generated sentences from posterior bearing target attribute and related content.",
"We also report percentages of related content generation for style inversion.",
"We report the mean performance of each model trained with three random initializations.",
"Baselines: We report ctrlGen and DAE for both metrics as they can sample generative representation from both prior and posterior.",
"Whereas entangleGen and probTrans can only generate sentences corresponding to a given posterior, we compare them only for style inversion .",
"We report controlled generation accuracy and style inversion accuracy for Yelp, Amazon, and GAB in Table 2. It can be observed that CTVAE outperforms all competing methods across the three datasets for controlled generation.",
"The superior performance of CTVAE stems from the fact that attribute supervision on disentangled representation helps to achieve better control of attributes than the semi supervised ctrlGen .",
"DAE, which is also an attribute-supervised technique, performs exactly the same as ours.",
"CTVAE effectively generates more related content than the others and achieves the best style inversion accuracy on Amazon and on both the hateful-to-non-hateful (H-NH) and non-hateful-to-hateful (NH-H) transitions for GAB.",
"It is the second best in Yelp .",
"DAE, along with ctrlGen, uses a disentangled generative space, which often causes loss of content information.",
"Hence, they generate less related content than the other methods, which leads to a drop in accuracy for style inversion.",
"entangleGen performs best for style inversion for Yelp and second best in other datasets.",
"It achieves relatively low accuracy even while producing a larger amount of related content.",
"It uses BERT embedding space to search for a candidate embedding closest to the original sentence for style inversion.",
"As Yelp contains shorter coherent sentences it is easy to find related yet opposite polarity sentence embedding whereas for GAB the H and NH sets are quite different and their representation spaces are far from each other causing poor performance.",
"The unsupervised method probTrans performs well on the relatively simpler Yelp and Amazon datasets; however, it fails to generate related content for the complex GAB dataset and scores the lowest.",
"[Figure 2 residue: legend entries (entangleGen AP/R, ctrlGen AP/R, CTVAE AP/R) and axis ticks from f-4 (Negative) to f4 (Positive).]",
"As converting a counter-hatespeech to hateful content is difficult, all methods perform poorly.",
"The performance of CTVAE-NR is significantly inferior compared to CTVAE .",
"Close inspection reveals that, even though at training we achieve a very low KL between $q(z_f | z_s)$ and $p(z_s | z_f)$, the decoded $z_s$ is not exactly the same as the encoded distribution.",
"Thus, it performs poorly in style inversion .",
"From Table 2, we can see that CTVAE performs best on both the Music and Family datasets for all metrics.",
"Conversion of a casual sentence into formal (C-F) is more difficult as it would require some structural change of the sentence, whereas the reverse transformation (F-C) is easy.",
"Though the disentanglement-based methods perform relatively better on C-F than on F-C conversion, overall they perform poorly, as they are unable to generate related content after perturbing the disentangled generative space.",
"entangleGen also performs poorly in both the datasets for both C-F and F-C.",
"As a pair of formal and corresponding informal sentences have very high content overlap (only structure, capitalization, etc. differ), they become very close in the BERT representation space.",
"The generative model of entangleGen generates sentences from this representation space; hence it cannot distinguish well between small changes in the representation.",
"This confuses the generative model, which very often regenerates the original sentence verbatim.",
"Unlike on GAB, probTrans performs better than all the semi-supervised methods as well as entangleGen, even though formality is a difficult attribute like hatred.",
"As the formality datasets are parallel data, probTrans can accurately estimate the latent variables for them, which is otherwise difficult.",
"Hence, it learns to successfully generate style-inverted text given the parallel sentences.",
"We perform Student's t-test with significance level 0.05 and report p-values against the closest baseline, following Reimers and Gurevych (2018), for the two tasks, i.e., controlled generation and style inversion.",
"For controlled generation we find the p-values per dataset as follows.",
"For Yelp the p-value is 0.009 compared against ctrlGen , for Amazon 0.019 with respect to ctrlGen , GAB 0.015 with ctrlGen , Music 0.012 against DAE and for Family the p-value is 0.008 compared with DAE .",
"On the first three datasets, DAE and CTVAE perform exactly the same.",
"Similarly, for style transfer we obtain the p-values as follows.",
"For Amazon, it is 0.028 in comparison to entangleGen; in GAB, for (H-NH) we get 0.028 compared against entangleGen, and for (NH-H) it is 0.032 in comparison to ctrlGen.",
"Music (C-F) yields 0.002 and (F-C) yields 0.017 against probTrans; for Family, (C-F) yields 0.024 against ctrlGen and (F-C) 0.030 against probTrans.",
"Experimental Setup: We evaluate the performance of fine grained attribute control as follows.",
"We create a set of $n$ equidistant values between $z_{min}$ and zero, denoted as $\{f_{-i}\}$, and another $n$ values between zero and $z_{max}$, denoted as $\{f_i\}$.",
"[Table 3 residue: an original sentence, 'every encounter i have had with her ... she is always rude or angry.', with generations from entangleGen, ctrlGen, and CTVAE at grades in F.]",
"The union set $F$ represents the attribute control grades.",
"Greater indices indicate higher perturbation in the attribute representation space and the sign denotes the direction.",
"Given a posterior representation $z_f$ of a sentence $x$, we assign $z_a$ a value from $F$, keeping $z_u$ fixed, and decode a $z_s$ from it.",
"We generate 10 sentences from it and select the sentence whose BERT embedding is closest to the original sentence and that bears the target attribute value.",
"We repeat this for all values in $F$.",
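"Constructing the grade set $F$ amounts to two linear interpolations, roughly as in this sketch (endpoint handling is our assumption):",
```python
# Hedged sketch: n equidistant grades on each side of zero.
import torch

def control_grades(z_min: float, z_max: float, n: int) -> torch.Tensor:
    neg = torch.linspace(z_min, 0.0, n + 1)[:-1]   # f_{-n}, ..., f_{-1}
    pos = torch.linspace(0.0, z_max, n + 1)[1:]    # f_1, ..., f_n
    return torch.cat([neg, pos])

print(control_grades(-2.0, 2.0, n=4))  # 8 grades spanning both directions
```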
"We consider an equivalent set $F$ with $n$ values for entangleGen, using the increasing modification weights $w$ employed for fine-grained attribute control in the original paper, and generate sentences accordingly.",
"Though ctrlGen does not support fine-grained transfer, we extended it by interpolating between the two structured attribute representation vectors [0, 1] and [1, 0], generating real-valued vectors in $F$ where each vector sums to one.",
"For each attribute representation vector, we generate sentences from them similar to CTVAE .",
"As the other models cannot be extended in the same way, we do not compare their performance here.",
"Metrics: We report an attribute polarity score AP, which estimates the degree of attribute polarity of a generated sentence, and a relatedness score R, which captures relatedness with the original sentence.",
"For review datasets Yelp and Amazon , AP is obtained from a pre-trained Stanford regressor model (Socher et al., 2013) normalized between 0 (most negative) and 1 (most positive).",
"A pilot study on 25 randomly picked sentences shows that the pre-trained regression score is highly correlated (Spearman's rank correlation 0.68) with human judgements.",
"We report R as Jaccard overlap (Tustison and Gee, 2009) of unigrams between original and generated sentence excluding stop words for these datasets.",
"However, for other three datasets the correlation observed is low.",
"Hence, we resort to human evaluation via the CrowdFlower platform (www.appen.com).",
"Given a test sentence, we generate $n$ sentences corresponding to $n$ different grades in the set $F$ and ask three annotators to rank these sentences from 1 to $n$.",
"We get the average rank for this instance and repeat for all test sentences to obtain average ranks as AP corresponding to each of the n values.",
"We ask them to provide an absolute score for the relatedness (R) of the generated sentences with respect to the original sentence on a scale of 1 to 10, 1 being least related; we rescale it and present the result on a scale of 0 to 1. A coherent scheme would see a monotonic change in AP as the attribute control grade varies from $f_{-n}$ to $f_n$, with R staying close to one throughout.",
"We demonstrate the performance of the generative models on one review dataset, Yelp, and the hate-speech dataset GAB in Figures 2(a) and 2(b), respectively.",
"We show the variation of attribute polarity AP and relatedness score R with n = 4 .",
"We can observe a smooth increase in AP as we move from $f_1$ to $f_4$ (denoting a greater shift of the original $z_a$ value towards $z_{max}$), while CTVAE consistently achieves high R on both datasets.",
"Similarly, as we move from $f_{-1}$ to $f_{-4}$, CTVAE shows a monotonic decrease in AP while still achieving the highest R.",
"Though a similar pattern is observed for ctrlGen on Yelp, it has an extremely poor R score, which indicates that it generates unrelated sentences in the process of fine-grained attribute regulation.",
"Moreover, ctrlGen shows minimal variation in sentiment score throughout the process.",
"In contrast, entangleGen achieves the highest R score, as it focuses on content preservation; however, its sentiment score transition is uneven and does not follow the desired coherence.",
"In contrast, CTVAE successfully maintains a balance for relatedness and attribute control.",
"It can be observed that CTVAE shows a monotonic transition as we move from left to right (denoting a higher degree of attribute representation change) on Amazon, while the other methods show haphazard changes.",
"On GAB, ctrlGen shows abrupt changes in AP and the lowest R score, which demonstrates very little control for fine-grained attribute regulation in hatred filtering.",
"Though entangleGen achieves the lowest AP score, signifying that it can remove hateful content more accurately than CTVAE, the variation is not monotonic.",
"Further inspection reveals that entangleGen mostly generates counter hate-speech as BERT representation clusters H and NH for GAB locate in two distant spaces.",
"Hence, the relatedness R of the generated sentences is low.",
"In contrast, CTVAE successfully maintains a balance for relatedness and attribute control in both.",
"We experiment with $n = 3$ equidistant values in each direction of $F$ and report the performance on the Music and Family datasets in Figures 2(d, e).",
"It can be observed from the figure that all the methods receive a similar AP score, around 2.0, for the C-F transformation from $f_1$ to $f_3$.",
"Also, as we move to the right after $f_1$, the changes in AP are inconsistent for CTVAE and entangleGen.",
"However, CTVAE achieves a relatively better formality score throughout.",
"entangleGen achieves the best R and a low AP due to very often generating the original content verbatim.",
"ctrlGen shows the lowest relatedness and achieves a transfer score of AP = 1.5 on average; that is, overall it fails to generate formal sentences.",
"Moving towards the casual transition, i.e., from $f_{-1}$ to $f_{-3}$, we observe a similar trend for CTVAE and entangleGen.",
"Though the variation with respect to the attribute control grades in $F$ is abrupt, we achieve the lowest AP, i.e., the most informal sentences.",
"ctrlGen performs very poorly with respect to all the other methods.",
"For Family, no trend in AP is found.",
"CTVAE maintains a high R, whereas ctrlGen achieves the lowest relatedness score.",
"We also investigate the fluency of these methods across datasets, reported in Table 4, and find that CTVAE produces a very high percentage of fluent sentences, similar to entangleGen.",
"As we have observed, entangleGen tends to copy the content for the formality datasets because the formal and casual sentences lie close in the representation space; hence its fluency is high.",
"Similarly, for the GAB dataset, as it tends to generate counter-hatespeech, its fluency remains high.",
"Finally, Table 3 provides examples of fine grained sentiment and hatred regulated sentences generated by CTVAE , entangleGen , and ctrlGen .",
"We observe that entangleGen generally produces long sentences, sometimes copying the original content.",
"It also produces the same sentence multiple times.",
"On the other hand, ctrlGen mostly generates sentences hardly related with the original content.",
"In contrast, CTVAE can generate related sentences and provides finer attribute variation, controlled by $f_i$.",
"The major contribution of this paper is CTVAE, a carefully designed hierarchical architecture that provides a disentangled representation to control an attribute without affecting the context, as well as an enriched entangled generative representation for meaningful sentence generation.",
"The invertible normalizing flow, as a transformation module between the two representations of CTVAE, enables learning the complex interdependency between attribute and context without loss of information.",
"Such a design choice is key to achieving accurate fine tuning of attributes (be it sentiment or formality) while keeping the content intact.",
"This is a key achievement considering the difficulty of the problem and the modest performance of state-of-the-art techniques.",
"Extensive experiments on real-world datasets emphatically establish the well-rounded performance of CTVAE and its superiority over the baselines."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"In this paper, we introduce ELECTRA-style tasks (Clark et al., 2020b) to cross-lingual language model pre-training.",
"Specifically, we present two pre-training tasks, namely multilingual replaced token detection, and translation replaced token detection.",
"Besides, we pretrain the model, named as XLM-E, on both multilingual and parallel corpora.",
"Our model outperforms the baseline models on various cross-lingual understanding tasks with much less computation cost.",
"Moreover, analysis shows that XLM-E tends to obtain better cross-lingual transferability.",
"It has become a de facto trend to use a pretrained language model (Devlin et al., 2019; Dong et al., 2019; Yang et al., 2019b; Bao et al., 2020) for downstream NLP tasks.",
"These models are typically pretrained with masked language modeling objectives, which learn to generate the masked tokens of an input sentence.",
"In addition to monolingual representations, the masked language modeling task is effective for learning cross-lingual representations.",
"By only using multilingual corpora, such pretrained models perform well on zero-shot cross-lingual transfer (Devlin et al., 2019; Conneau et al., 2020), i.e., fine-tuning with English training data while directly applying the model to other target languages.",
"The cross-lingual transferability can be further improved by introducing external pre-training tasks using parallel corpus, such as translation language modeling (Conneau and Lample, 2019), and cross-lingual contrast (Chi et al., 2021b).",
"However, previous cross-lingual pre-training based on masked language modeling usually requires massive computation resources, rendering such models quite expensive.",
"As shown in Figure 1, our proposed XLM-E achieves a huge speedup compared with well-tuned pretrained models.",
"In this paper, we introduce ELECTRA-style tasks (Clark et al., 2020b) to cross-lingual language model pre-training.",
"Specifically, we present two discriminative pre-training tasks, namely multilingual replaced token detection, and translation replaced token detection.",
"Rather than recovering masked tokens, the model learns to distinguish the replaced tokens in the corrupted input sequences.",
"The two tasks build input sequences by replacing tokens in multilingual sentences, and translation pairs, respectively.",
"We also describe the pretraining algorithm of our model, XLM-E, which is pretrained with the above two discriminative tasks.",
"It provides a more compute-efficient and sample-efficient way for cross-lingual language model pretraining.",
"We conduct extensive experiments on the XTREME cross-lingual understanding benchmark to evaluate and analyze XLM-E.",
"Over seven datasets, our model achieves results competitive with the baseline models while using only 1% of the computation cost of XLM-R.",
"In addition to the high computational efficiency, our model also shows the cross-lingual transferability that achieves a reasonably low transfer gap.",
"We also show that the discriminative pre-training encourages universal representations, making the text representations better aligned across different languages.",
"Our contributions are summarized as follows: We explore ELECTRA-style tasks for cross-lingual language model pre-training, and pretrain XLM-E with both multilingual corpus and parallel data.",
"We demonstrate that XLM-E greatly reduces the computation cost of cross-lingual pretraining.",
"We show that discriminative pre-training tends to encourage better cross-lingual transferability.",
"ELECTRA: ELECTRA (Clark et al., 2020b) introduces the replaced token detection task for language model pre-training, with the goal of distinguishing real input tokens from corrupted tokens.",
"That means the text encoders are pretrained as discriminators rather than generators, which differs from previous pretrained language models, such as BERT (Devlin et al., 2019), that learn to predict the masked tokens.",
"ELECTRA trains two Transformer (Vaswani et al., 2017) encoders, serving as generator and discriminator, respectively.",
"The generator G is typically a small BERT model trained with the masked language modeling (MLM; Devlin et al. 2019) task.",
"Consider an input sentence $x = \{x_i\}_{i=1}^{n}$ containing $n$ tokens.",
"MLM first randomly selects a subset $M \subseteq \{1, \ldots, n\}$ as the positions to be masked and constructs the masked sentence $x^{\mathrm{masked}}$ by replacing the tokens in $M$ with [MASK].",
"Then, the generator predicts the probability distributions of the masked tokens, $p_G(x | x^{\mathrm{masked}})$.",
"The loss function of the generator $G$ is: $\mathcal{L}_G(x; \theta_G) = -\sum_{i \in M} \log p_G(x_i | x^{\mathrm{masked}})$ (1)",
"The discriminator D is trained with the replaced token detection task.",
"Specifically, the discriminator takes the corrupted sentence $x^{\mathrm{corrupt}}$ as input, which is constructed by replacing the tokens in $M$ with tokens sampled from the generator $G$: $x_i^{\mathrm{corrupt}} \sim p_G(x_i | x^{\mathrm{masked}})$ if $i \in M$, and $x_i^{\mathrm{corrupt}} = x_i$ if $i \notin M$ (2). Then, the discriminator predicts whether each $x_i^{\mathrm{corrupt}}$ is original or sampled from the generator.",
"The loss function of the discriminator is $\mathcal{L}_D(x; \theta_D) = -\sum_{i=1}^{n} \log p_D(z_i | x^{\mathrm{corrupt}})$ (3), where $z_i$ represents the label of whether $x_i^{\mathrm{corrupt}}$ is the original token or a replaced one.",
"The final loss function of ELECTRA is the combined loss of the generator and discriminator losses, LE = LG + LD .",
"Compared to generative pre-training, ELECTRA uses more model parameters and training FLOPs per step, because it contains a generator and a discriminator during pre-training.",
"However, only the discriminator is used for fine-tuning on downstream tasks, so the size of the final checkpoint is similar to BERT-like models in practice.",
"Figure 2 shows an overview of the two discriminative tasks used for pre-training XLM-E.",
"Similar to ELECTRA described in Section 2, XLM-E has two Transformer components, i.e., generator and discriminator.",
"The generator predicts the masked tokens given the masked sentence or translation pair, and the discriminator distinguishes whether the tokens are replaced by the generator.",
"The pre-training tasks of XLM-E are multilingual replaced token detection (MRTD), and translation replaced token detection (TRTD).",
"The multilingual replaced token detection task requires the model to distinguish real input tokens from",
"corrupted multilingual sentences.",
"Both the generator and the discriminator are shared across languages.",
"The vocabulary is also shared for different languages.",
"The task is the same as in monolingual ELECTRA pre-training (Section 2).",
"The only difference is that the input texts can be in various languages.",
"We use uniform masking to produce the corrupted positions.",
"We also tried span masking (Joshi et al., 2019; Bao et al., 2020) in our preliminary experiments.",
"The results indicate that span masking significantly weakens the generator's prediction accuracy, which in turn harms pre-training.",
"Translation Replaced Token Detection Parallel corpora are easily accessible and proved to be effective for learning cross-lingual language models (Conneau and Lample, 2019; Chi et al., 2021b), while it is under-studied how to improve discriminative pre-training with parallel corpora.",
"We introduce the translation replaced token detection task that aims to distinguish real input tokens from translation pairs.",
"Given an input translation pair, the generator predicts the masked tokens in both languages.",
"Consider an input translation pair ( e , f ) .",
"We construct the input sequence by concatenating the translation pair as a single sentence.",
"The loss function of the generator G is: LG ( e , f ; G ) = (cid:88) i M e log p G ( e i | [ e ; f ] masked ) (cid:88) i M f log p G ( f i | [ e ; f ] masked ) where [; ] is the operator of concatenation, and M e , M f stand for the randomly selected masked positions for e and f , respectively.",
"This loss function is identical to the translation language modeling loss (TLM; Conneau and Lample 2019).",
"The discriminator D learns to distinguish real input tokens from the corrupted translation pair.",
"The corrupted translation pair ( e corrupt , f corrupt ) is constructed by replacing tokens with the tokens sampled from G with the concatenated translation pair as input.",
"Formally, e corrupt is constructed by (cid:40) e corrupt i p G ( e i | [ e ; f ] masked ) , i M e e corrupt i = e i , i (cid:54) M e (4) The same operation is also used to construct f corrupt .",
"Then, the loss function of the discriminator D can be written as LD ( e , f ; D ) = n e + n f (cid:88) i =1 log p D ( r i | [ e ; f ] corrupt ) (5) where r i represents the label of whether the i -th input token is the original one or the replaced one.",
"The final loss function of the translation replaced token detection task is LG + LD .",
"The XLM-E model is jointly pretrained with the masked language modeling, translation language modeling, multilingual replaced token detection and the translation replaced token detection tasks.",
"The overall training objective is to minimize L = LMLM ( x ; G ) + LTLM ( e , f ; G ) + LMRTD ( x ; D ) + LTRTD ( e , f ; D ) over large scale multilingual corpus X = { x } and parallel corpus P = { ( e , f ) } .",
"the generator and the discriminator from scratch.",
"Following Clark et al. (2020b), we make the generator smaller to improve the pre-training efficiency.",
"We propose to use gated relative position bias in the self-attention mechanism.",
"Given input tokens { x i } | x | i =1 , let { h i } | x | i =1 denote their hidden states in Transformer.",
"The self-attention outputs { h i } | x | i =1 are computed via: q i , k i , v i = h i WQ , h i WK , h i WV (6) a ij exp { q i k j d k + r i j } (7) h i = | x | (cid:88) j =1 a ij v i (8) where r i j represents gated relative position bias, each h i is linearly projected to a triple of query, key and value using parameter matrices WQ , WK , WV R d h d k , respectively.",
"Inspired by the gating mechanism of Gated Recurrent Unit (GRU; Cho et al. 2014), we compute gated relative position bias r i j via: g (update) , g (reset) = ( q i u ) , ( q i v ) r i j = wg (reset) d i j r i j = d i j + g (update) d i j + (1 g (update) ) r i j where d i j is learnable relative position bias, the vectors u , v R d k are parameters, is a sigmoid function, and w is a learnable value.",
"Compared with relative position bias (Parikh et al., 2016; Raffel et al., 2020; Bao et al., 2020), the proposed gates take the content into consideration, which adaptively adjusts the relative position bias by conditioning on input tokens.",
"Intuitively, the same distance between two tokens tends to play different roles in different languages.",
"Data We use the CC-100 (Conneau et al., 2020) dataset for the replaced token detection task.",
"CC-100 contains texts in 100 languages collected from the CommonCrawl dump.",
"We use parallel corpora for the translation replaced token detection task, including translation pairs in 100 languages collected from MultiUN (Ziemski et al., 2016), IIT Bombay (Kunchukuttan et al., 2018), OPUS (Tiede-mann, 2012), WikiMatrix (Schwenk et al., 2019), and CCAligned (El-Kishky et al., 2020).",
"Following XLM (Conneau and Lample, 2019), we sample multilingual sentences to balance the language distribution.",
"Formally, consider the pretraining corpora in N languages with m j examples for the j -th language.",
"The probability of using an example in the j -th language is p j = m j (cid:80) Nk =1 m k (9) The exponent controls the distribution such that a lower increases the probability of sampling examples from a low-resource language.",
"In this paper, we set = 0 .",
"7 .",
"Model We use a Base-size 12 -layer Transformer (Vaswani et al., 2017) as the discriminator, with hidden size of 768 , and FFN hidden size of 3 , 072 .",
"The generator is a 4 -layer Transformer using the same hidden size as the discriminator (Meng et al., 2021).",
"See Appendix A for more details of model hyperparameters.",
"Training We jointly pretrain the generator and the discriminator of XLM-E from scratch, using the Adam (Kingma and Ba, 2015) optimizer for 125K training steps.",
"We use dynamic batching of approximately 1M tokens for each pre-training task.",
"We set , the weight for the discriminator objective to 50.",
"The whole pre-training procedure takes about 1.7 days on 64 Nvidia A100 GPU cards.",
"See Appendix B for more details of pre-training hyperparameters.",
"We evaluate XLM-E on the XTREME (Hu et al., 2020b) benchmark, which is a multilingual multitask benchmark for evaluating cross-lingual understanding.",
"The XTREME benchmark contains seven cross-lingual understanding tasks, namely part-of-speech tagging on the Universal Dependencies v2.5 (Zeman et al., 2019), NER named entity recognition on the Wikiann (Pan et al., 2017; Rahimi et al., 2019) dataset, cross-lingual natural language inference on XNLI (Conneau et al., 2018), cross-lingual paraphrase adversaries from word scrambling (PAWS-X; Yang et al. 2019a), and cross-lingual question answering on MLQA (Lewis et al., 2020), XQuAD (Artetxe et al., 2020), and TyDiQA-GoldP (Clark et al., 2020a).",
"Baselines We compare our XLM-E model with the cross-lingual language models pretrained with multilingual text, i.e., Multilingual BERT ( MBERT ; Devlin et al. 2019), M T5 (Xue et al., 2021), and XLM-R (Conneau et al., 2020), or pretrained with both multilingual text and parallel corpora, i.e., XLM (Conneau and Lample, 2019), INFOXLM (Chi et al., 2021b), and XLM-ALIGN (Chi et al., 2021c).",
"The compared models are all in Base size.",
"In what follows, models are considered as in Base size by default.",
"Results We use the cross-lingual transfer setting for the evaluation on XTREME (Hu et al., 2020b), where the models are first fine-tuned with the English training data and then evaluated on the target languages.",
"In Table 1, we report the accuracy, F1, or Exact-Match (EM) scores on the XTREME cross-lingual understanding tasks.",
"The results are averaged over all target languages and five runs with different random seeds.",
"We divide the pretrained models into two categories, i.e., the models pretrained on multilingual corpora, and the models pretrained on both multilingual corpora and parallel corpora.",
"For the first setting, we pretrain XLM-E with only the multilingual replaced token detection task.",
"From the results, it can be observed that XLM-E outperforms previous models on both settings, achieving the averaged scores of 67.6 and 69.3, respectively.",
"Compared to XLM-R, XLM-E (w/o TRTD) produces an absolute 1.2 improvement on average over the seven tasks.",
"For the second setting, compared to XLM-ALIGN , XLM-E produces an absolute 0.4 improvement on average.",
"XLM-E performs better on the question answering Model XNLI MLQA XLM (reimplementation) 73.4 66.2 / 47.8 TLM 70.6 64.0 / 46.0 XLM-E 76.6 68.3 / 49.8 TRTD 75.1 67.8 / 49.7 TRTD Gated relative position bias 75.2 67.4 / 49.2 Table 2: Ablation studies of XLM-E.",
"Despite the effectiveness of XLM-E, our model requires substantially lower computation cost than XLM-R and XLM-ALIGN .",
"A detailed efficiency analysis in presented in Section 4.5.",
"For a deeper insight to XLM-E, we conduct ablation experiments where we first remove the TRTD task and then remove the gated relative position bias.",
"Besides, we reimplement XLM that is pretrained with the same pre-training setup with XLM-E, i.e., using the same training steps, learning rate, etc.",
"Table 2 shows the ablation results on XNLI and MLQA.",
"Removing TRTD weakens the performance of XLM-E on both downstream tasks.",
"On this basis, the results on MLQA further decline when removing the gated relative position bias.",
"This demonstrates that XLM-E benefits from both TRTD and the gated relative position bias during pre-training.",
"Besides, XLM-E substantially outperform XLM on both tasks.",
"Notice that when removing the two components from XLM-E, our 6174 Model Size Params XNLI MLQA XLM-E Base 279M 76.6 68.3 / 49.8 XLM-E Large 840M 81.3 72.7 / 54.2 XLM-E XL 2.2B 83.7 76.2 / 57.9 XLM-R XL 3.5B 82.3 73.4 / 55.3 M T5 XL 3.7B 82.9 73.5 / 54.5 Table 3: Results of scaling-up the model size.",
"model only requires a multilingual corpus, but still achieves better performance than XLM, which uses an additional parallel corpus.",
"Scaling-up model size has shown to improve performance on cross-lingual downstream tasks (Xue et al., 2021; Goyal et al., 2021).",
"We study the scal-ability of XLM-E by pre-training XLM-E models using larger model sizes.",
"We consider two larger model sizes in our experiments, namely Large and XL.",
"Detailed model hyperparameters can be found in Appendix A. As present in Table 3, XLM-EXL achieves the best performance while using significantly fewer parameters than its counterparts.",
"Besides, scaling-up the XLM-E model size consistently improves the results, demonstrating the effectiveness of XLM-E for large-scale pre-training.",
"We present a comparison of the pre-training resources, to explore whether XLM-E provides a more compute-efficient and sample-efficient way for pre-training cross-lingual language models.",
"Table 4 compares the XTREME average score, the number of parameters, and the pre-training computation cost.",
"Notice that INFOXLM and XLM-ALIGN are continue-trained from XLM-R, so the total training FLOPs are accumulated over XLM-R.",
"Table 4 shows that XLM-E substantially reduces the computation cost for cross-lingual language model pre-training.",
"Compared to XLM-R and XLM-ALIGN that use at least 9.6e21 training Model Tatoeba-14 Tatoeba-36 en xx xx en en xx xx en XLM-R 59.5 57.6 55.5 53.4 INFOXLM 80.6 77.8 68.6 67.3 XLM-E 74.4 72.3 65.0 62.3 TRTD 55.8 55.1 46.4 44.6 Table 5: Average accuracy@1 scores for Tatoeba cross-lingual sentence retrieval.",
"FLOPs, XLM-E only uses 9.5e19 training FLOPs in total while even achieving better XTREME performance than the two baseline models.",
"For the setting of pre-training with only multilingual corpora, XLM-E (w/o TRTD) also outperforms XLM-R using 6.3e19 FLOPs in total.",
"This demonstrates the compute-effectiveness of XLM-E, i.e., XLM-E as a stronger cross-lingual language model requires substantially less computation resource.",
"To explore whether discriminative pre-training improves the resulting cross-lingual representations, we evaluate our model on the sentence-level and word-level alignment tasks, i.e., cross-lingual sentence",
"sentence retrieval and word alignment.",
"We use the Tatoeba (Artetxe and Schwenk, 2019) dataset for the cross-lingual sentence retrieval task, the goal of which is to find translation pairs from the corpora in different languages.",
"Tatoeba consists of English-centric parallel corpora covering 122 languages.",
"Following Chi et al. (2021b) and Hu et al. (2020b), we consider two settings where we use 14 and 36 of the parallel corpora for evaluation, respectively.",
"The sentence representations are obtained by average pooling over hidden vectors from a middle layer.",
"Specifically, we use layer-7 for XLM-R and layer-9 for XLM-E.",
"Then, the translation pairs are induced by the nearest neighbor search using the cosine similarity.",
"Table 5 shows the average accuracy@1 scores under the two settings of Tatoeba for both the xx en and en xx directions.",
"XLM-E achieves 74.4 and 72.3 accuracy scores for Tatoeba-14, and 65.0 and 62.3 accuracy scores for Tatoeba-36, providing notable improvement over XLM-R.",
"XLM-E performs slightly worse than INFOXLM.",
"We believe the cross-lingual contrast (Chi et al., 2021b) task explicitly learns the sentence representations, which makes INFOXLM more effective for the cross-lingual sentence retrieval task.",
"For the word-level alignment, we use the word alignment datasets from EuroParl 1 , WPT2003 2 , and WPT2005 3 , containing 1,244 translation pairs annotated with golden alignments.",
"The predicted alignments are evaluated by alignment error rate (AER; Och and Ney 2003): AER = 1 | A S | + | A P | | A | + | S | (10) where A, S, and P stand for the predicted alignments, the annotated sure alignments, and the annotated possible alignments, respectively.",
"In Table 6 we compare XLM-E with baseline models, i.e., fast align (Dyer et al., 2013), XLM-R, and XLM-ALIGN .",
"The resulting word alignments are obtained by the optimal transport method (Chi et al., 2021c), where the sentence representations are from the 9 -th layer of XLM-E.",
"Over the four language pairs, XLM-E achieves lower AER scores than the baseline models, reducing the average AER from 21 .",
"05 to 19.32.",
"It is worth mentioning that our model requires substantial lower computation costs than the other cross-lingual pretrained language models to achieve such low AER scores.",
"See the detailed training efficiency analysis in Section 4.5.",
"It is worth mentioning that XLM-E shows notable improvements over XLM-E (w/o TRTD) on both tasks, demonstrating that the translation replaced token detection task is effective for cross-lingual alignment.",
"We evaluate the word-level and sentence-level representations over different layers to explore",
"As shown in Figure 3, we illustrate the accu-racy@1 scores of XLM-E and XLM-R on Tatoeba cross-lingual sentence retrieval, using sentence representations from different layers.",
"For each layer, the final accuracy score is averaged over all the 36 language pairs in both the xx en and en xx directions.",
"From the figure, it can be observed that XLM-E achieves notably higher averaged accuracy scores than XLM-R for the top layers.",
"The results of XLM-E also show a parabolic trend across layers, i.e., the accuracy continuously increases before a specific layer and then continuously drops.",
"This trend is also found in other cross-lingual language models such as XLM-R and XLM-Align (Jalili Sabet et al., 2020; Chi et al., 2021c).",
"Different from XLM-R that achieves the highest accuracy of 54.42 at layer-7, XLM-E pushes it to layer-9, achieving an accuracy of 63.66.",
"At layer-10, XLM-R only obtains an accuracy of 43.34 while XLM-E holds the accuracy score as high as 57.14.",
"(AER) scores of XLM-E and XLM-R on the word alignment task.",
"We use the hidden vectors from different layers to perform word alignment, where layer-0 stands for the embedding layer.",
"The final AER scores are averaged over the four test sets in different languages.",
"Figure 4 shows a similar trend to that in Figure 3, where XLM-E not only provides substantial performance improvements over XLM-R, but also pushes the best-performance layer to a higher layer, i.e., the model obtains the best performance at layer-9 rather than a lower layer such as layer-7.",
"On both tasks, XLM-E shows good performance for the top layers, even though both XLM-E and XLM-R use the Transformer (Vaswani et al., 2017) architecture.",
"Compared to the masked language modeling task that encourages the top layers to be language-specific, discriminative pre-training makes XLM-E producing better-aligned text representations at the top layers.",
"It indicates that the cross-lingual discriminative pre-training encourages universal representations inside the model.",
"We analyze the cross-lingual transfer gap (Hu et al., 2020b) of the pretrained cross-lingual language models.",
"The transfer gap score is the difference between performance on the English test set and the average performance on the test set in other languages.",
"This score suggests how much end task knowledge has not been transferred to other languages after fine-tuning.",
"A lower gap score indicates better cross-lingual transferability.",
"Table 7 compares the cross-lingual transfer gap scores on five of the XTREME tasks.",
"We notice that XLM-E obtains the lowest gap score only on PAWS-X.",
"Nonetheless, it still achieves reasonably low gap scores on the other tasks with such low computation cost, demonstrating the cross-lingual transferability of XLM-E.",
"We believe that it is more difficult to achieve the same low gap scores when the model obtains better performance.",
"Learning self-supervised tasks on large-scale multilingual texts has proven to be effective for pretraining cross-lingual language models.",
"Masked language modeling (MLM; Devlin et al. 2019) is typically used to learn cross-lingual encoders such as multilingual BERT (mBERT; Devlin et al. 2019) and XLM-R (Conneau et al., 2020).",
"The cross-lingual language models can be further improved by introducing external pre-training tasks using parallel corpora.",
"XLM (Conneau and Lample, 2019) introduces the translation language modeling (TLM) task that predicts masked tokens from concatenated translation pairs.",
"ALM (Yang et al., 2020) utilizes translation pairs to construct code-switched sequences as input.",
"InfoXLM (Chi et al., 2021b) considers an input translation pair as cross-lingual views of the same meaning, and proposes a cross-lingual contrastive learning task.",
"Several pre-training tasks utilize the token-level alignments in parallel data to improve cross-lingual language models (Cao et al., 2020; Zhao et al., 2021; Hu et al., 2020a; Chi et al., 2021c).",
"In addition, parallel data are also employed for cross-lingual sequence-to-sequence pre-training.",
"XNLG (Chi et al., 2020) presents cross-lingual masked language modeling and cross-lingual auto-encoding for cross-lingual natural language generation, and achieves the cross-lingual transfer for NLG tasks.",
"VECO (Luo et al., 2020) utilizes cross-attention MLM to pretrain a variable cross-lingual language model for both NLU and NLG.",
"mT6 (Chi et al., 2021a) improves mT5 (Xue et al., 2021) by learning the translation span corruption task on parallel data.",
"LM (Ma et al., 2021) proposes to align pretrained multilingual encoders to improve cross-lingual sequence-to-sequence pre-training.",
"We introduce XLM-E, a cross-lingual language model pretrained by ELECTRA-style tasks.",
"Specifically, we present two pre-training tasks, i.e., multilingual replaced token detection, and translation replaced token detection.",
"XLM-E outperforms baseline models on cross-lingual understanding tasks although using much less computation cost.",
"In addition to improved performance and computational efficiency, we also show that XLM-E 6177 obtains the cross-lingual transferability with a reasonably low transfer gap.",
"Alexis Conneau and Guillaume Lample.",
"2019.",
"Cross-lingual language model pretraining.",
"In Advances in Neural Information Processing Systems , pages 70577067.",
"Curran Associates, Inc. 6178 Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov.",
"Our work introduces ELECTRA-style tasks for cross-lingual language model pre-training, which requires much less computation cost than previous models and substantially reduces the energy cost.",
"Heyan Huang is the corresponding author.",
"Zewen Chi, Xian-Ling Mao, and Heyan Huang are supported by National Key R&D Plan (No. 2018YFB1005100), National Natural Science Foundation of China (No. U19B2020, 62172039, 61732005, 61602197 and L1924068), the funds of Beijing Advanced Innovation Center for Language Resources (No. TYZ19005), and in part by CCF-AFSG Research Fund under Grant No.RF20210005, and in part by the fund of Joint Laboratory of HUST and Pingan Property & Casualty Research (HPL)."
] | [
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"result",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other"
] |
[
"Moderation is crucial to promoting healthy online discussions.",
"Although several toxicity' detection datasets and models have been published, most of them ignore the context of the posts, implicitly assuming that comments may be judged independently.",
"We investigate this assumption by focusing on two questions:",
"(a) does context affect the human judgement, and",
"(b) does conditioning on context improve performance of toxicity detection systems?",
"We experiment with Wikipedia conversations, limiting the notion of context to the previous post in the thread and the discussion title.",
"We find that context can both amplify or mitigate the perceived toxicity of posts.",
"Moreover, a small but significant subset of manually labeled posts (5% in one of our experiments) end up having the opposite toxicity labels if the annotators are not provided with context.",
"Surprisingly, we also find no evidence that context actually improves the performance of toxicity classifiers, having tried a range of classifiers and mechanisms to make them context aware.",
"This points to the need for larger datasets of comments annotated in context.",
"We make our code and data publicly available.",
"Systems that detect abusive language are used to promote healthy conversations online and protect minority voices (Hosseini et al., 2017).",
"Apart from a growing volume of press articles concerning toxicity online, 1 there is increased research interest on detecting abusive and other unwelcome comments labeled toxic' by moderators, both for English and other languages.",
"2 However, the vast majority of 1 Following the work of Wulczyn et al. (2017) and Borkan et al. (2019), toxicity is defined as a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion (Wulczyn et al., 2017).",
"2 For English, see for example TRAC (Kumar et al., 2018), OFFENSEVAL (Zampieri et al., 2019b), or the recent Workshops on Abusive Language Online ( https://goo.gl/ PARENT All of his arguements are nail perfect, you're inherently stupid.",
"The lead will be changed.",
"TARGET Great argument!",
"PARENT Really?",
"It's schmucks like you (and Bush) who turn the world into the shithole it is today!",
"TARGET I'd be interested in the reasoning for that comment, personally.",
"(bounties)",
"PARENT Indeed.",
"Hitler was also strongly antipornography [...] it sure looks like Hitler is a hot potato that nobody wants to be stuck with.",
"TARGET Well I guess they won't approve the slogan Hitler hated porn.",
"PARENT ??",
"When did I attack you?",
"I definitely will present this to the arbcom, you should mind WP:CIVIL when participating in discussions in Wikipedia.",
"TARGETI blame you for my alcoholism add that too Table 1: Comments that are not easily labeled for toxicity without the parent' (previous) comment.",
"current datasets do not include the preceding comments in a conversation and such context was not shown to the annotators who provided the gold toxicity labels.",
"Consequently, systems trained on these datasets ignore the conversational context.",
"For example, a comment like nope, I don't think so may not be judged as rude or inflammatory by such a system, but the system's score would probably be higher if the system could also consider the previous (also called parent ) comment might it be that I am sincere?.",
"Table 1 shows additional examples of comments that are not easily judged for toxicity without the parent comment.",
"Interestingly, even basic statistics on how often context affects the perceived toxicity of online posts have not been published.",
"Hence, in this paper we focus on the following two foundational research questions: RQ 1: How often does context affect the toxicity of posts as perceived by humans in online conversations?",
"And how often does context amplify or mitigate the perceived toxicity?",
"9HmSzc ).",
"For other languages, see for example the German GERMEVAL ( https://goo.gl/uZEerk ).",
"COMMENT WITH TOXICITYAMPLIFIEDIN CONTEXTPARENT But what if the user is a lesbian?",
"Then what?",
"TARGET Pigs Are People Too.",
"Avant-garde a clue",
"COMMENT WITH TOXICITYMITIGATEDIN CONTEXTPARENT Hmmm.",
"The flame on top of the gay pride emblem can probably be interpreted in a manner that I did not consider.",
"Perhaps one icon on each end using?",
"TARGET Hi Gadget, interpreted in what manner?",
"Flaming gays?",
"Or Burn a gay?",
"RQ 2: Does context actually improve the performance of toxicity classifiers, when they are made context-aware?",
"And how can toxicity classifiers be made context-aware?",
"To investigate these questions we created and make publicly available two new toxicity datasets that include context, which are based on discussions in Wikipedia Talk Pages (Hua et al., 2018).",
"The first one is a small dataset of 250 comments, created in an AB test fashion, where two different groups of annotators (crowd-workers) were employed.",
"One group annotated the comments without context, while the other group was given the same comments, this time along with the parent comment and the title of the thread as context.",
"We used this dataset to show that the perceived toxicity of a significant subset of posts (5.2% in our experiment) changes when context is (or is not) provided.",
"We conclude that a small but significant subset of manually labeled posts end up having wrong toxicity labels if the annotators are not provided with context.",
"We also found that context can both amplify (approximately 3.6% of comments in our experiment) and mitigate (approx. 1.6%) the perceived toxicity.",
"Examples of comments that were differently labeled with and without context are shown in Table 2.",
"To investigate the second question, concerning the effect of context on the performance of toxicity classifiers, we created a larger dataset of 20k comments; 10k comments were annotated out of context, 10k in context.",
"This time we did not require the same comments to be annotated with and without context, which allowed us to crowd-source the collection of a larger set of annotations.",
"These two new subsets were used to train several toxicity detection classifiers, both context-aware and context-unaware, which were evaluated on held out comments that we always annotated in context (based on the assumption that in-context labels are more reliable).",
"Surprisingly, we found no evidence that context actually improves the performance of toxicity classifiers.",
"We tried a range of classifiers and mechanisms to make them context aware, and having also considered the effect of using gold labels obtained out of context or by showing context to the annotators.",
"This finding is likely related to the small number of context-sensitive comments.",
"In turn this suggests that an important direction for further research is how to efficiently annotate larger corpora of comments in context.",
"We make our code and data publicly available.",
"3 2 Related Work Toxicity detection has attracted a lot of attention in recent years (Nobata et al., 2016; Pavlopoulos et al., 2017b; Park and Fung, 2017; Wulczyn et al., 2017).",
"Here we use the term toxic' as an umbrella term, but we note that the literature uses several terms for different kinds of toxic language or related phenomena: offensive' (Zampieri et al., 2019a), abusive' (Pavlopoulos et al., 2017a), hateful' (Djuric et al., 2015; Malmasi and Zampieri, 2017; ElSherief et al., 2018; Gamback and Sikdar, 2017; Zhang et al., 2018), etc.",
"There are also taxonomies for these phenomena based on their directness (e.g., whether the abuse was unambiguously implied/denoted or not), and their target (e.g., whether it was a general comment or targeting an individual/group) (Waseem et al., 2017).",
"Other hierarchical taxonomies have also been defined (Zampieri et al., 2019a).",
"While most previous work does not address toxicity in general, instead addressing particular subtypes, toxicity and its subtypes are strongly related, with systems trained to detect toxicity being effective also at subtypes, such as hateful language (van Aken et al., 2018).",
"As is customary in natural language processing, we focus on aggregate results when hoping to answer our research questions, and leave largely unanswered the related epistemological questions when this does not preclude using classifiers in real-world applications.",
"Table 3 lists all currently available public datasets for the various forms of toxic language that we are aware of.",
"The two last columns show that 3 https://github.com/ipavlopoulos/ context_toxicity Dataset Name Source Size Type Lang.",
"no existing English dataset provides both context (e.g., parent comment) and context-aware annotations (annotations provided by humans who also considered the parent comment).",
"Both small and large toxicity datasets have been developed, but approximately half of them contain tweets, which makes reusing the data difficult, because abusive tweets are often removed by the platform.",
"Moreover, the textual content is not available under a license that allows its storage outside the platform.",
"The hateful language detection dataset of Waseem and Hovy (2016), for example, contains 1,607 sexism and racism annotations for ID s of English tweets.",
"A larger dataset was published by Davidson et al. (2017), containing approx.",
"25k annotations for tweetID s, collected using a lexicon of hateful terms.",
"Research on forms of abusive language detection is mainly focused on English (6 out of 10 datasets), but datasets in other languages also exist, such as Greek (Pavlopoulos et al., 2017a), Arabic (Mubarak et al., 2017), and German (Ross et al., 2016; Wiegand et al., 2018).",
"A common characteristic of most of the datasets listed in Table 3 is that, during annotation, the human workers were not provided with, nor instructed to review, the context of the target text.",
"Context such as the preceding comments in the thread, or the title of the article being discussed, or the discussion topic.",
"A notable exception is the work of Gao and Huang (2017), who annotated hateful comments under Fox News articles by also considering the title of the news article and the preceding comments.",
"However, this dataset has three major shortcomings.",
"First, the dataset is very small, comprising approximately 1.5k posts retrieved from the discussion threads of only 10 news articles.",
"Second, the authors did not release sufficient information to reconstruct the threads and allow systems to consider the parent comments.",
"Third, only a single annotator was used for most of the comments, which makes the annotations less reliable.",
"Two other datasets, both non English, also include context-aware annotations.",
"Mubarak et al. (2017) provided the title of the respective news article to the annotators, but ignored parent comments.",
"This is problematic when new comments change the topic of the discussion and when replies require the previous posts to be judged.",
"Pavlopoulos et al. (2017a) used professional moderators, who were monitoring entire threads and were thus able to use the context of the thread to judge for the toxicity of the comments.",
"However, the plain text of the comments for this dataset is not available, which makes further analysis difficult.",
"Moreover, crucially for this study, the context of the comments was not released in any form.",
"In summary, of the datasets we know of (Ta-ble 3), only two include context (Gao and Huang, 2017; Mubarak et al., 2017), and this context is limited to the title of the news article the comment was about.",
"As discussed above, Gao and Huang (2017) include the parent comments in their dataset, but without sufficient information to link the target comments to the parent ones.",
"Hence no toxicity dataset includes the raw text of both target and parent comments with sufficient links between the two .",
"This means that toxicity detection methods cannot exploit the conversational context when being trained on existing datasets.",
"Using previous comments of a conversation or preceding sentences of a document is not uncommon in text classification and language modeling.",
"Mikolov and Zweig (2012), for example, used LDA to encode the preceding sentences and pass the en-Dataset Statistics CAT-SMALL CAT-LARGE #comments ( N / C ) 250 10k/10k avg.",
"coded sentence history to an RNN language model (Blei et al., 2003).",
"Their approach achieved state of the art language modeling results and was used as an alternative solution (e.g., to LSTM s) for the problem of vanishing gradients.",
"Sordoni et al. (2015) experimented with concatenating consecutive utterances (or their representations) before passing them to an RNN to generate conversational responses.",
"They reported gains up to 11% in BLEU (Papineni et al., 2002).",
"Ren et al. (2016) reported significant gains in Twitter sentiment classification, when adding contextual features.",
"To investigate how often context affects the perceived toxicity of posts, we created CAT-SMALL , a small Context-Aware Toxicity dataset of 250 randomly selected comments from the Wikipedia Talk Pages (Table 4).",
"We gave these comments to two groups of crowd-workers to judge their toxicity.",
"The first group ( GC , Group with Context) was also given access to the parent comment and the discussion title, while the second group ( GN , Group with No context) was provided with no context.",
"No annotator could belong to both groups, to exclude the case of an annotator having seen the context of a post and then being asked to label the same post without its context.",
"We used the Figure Eight crowd-sourcing platform, which provided us with these mutually exclusive groups of annotators.",
"4 We collected three judgments per comment, per group.",
"All comments were between 10 and 400 characters long.",
"Their depth in their threads was from 2 4 See https://www.figure-eight.com/ .",
"The annotators were high-performing workers from previous jobs.",
"The demographics and backgrounds of the crowdworkers are detailed in Posch et al. (2018).",
"We used the parent comment and discussion title only, instead of a larger context (e.g., the entire thread), to speed up our machine learning experiments, and also because reading only the previous comment and the discussion title made the manual annotation easier.",
"In preliminary experiments, we observed that including more preceding comments had the side effect of workers tending to ignore the context completely.",
"5 We addressed this problem by asking the annotators an extra question: Was the parent comment less, more, or equally toxic?",
"For each comment and group of annotators, the toxicity scores of the annotators were first averaged and rounded to the nearest binary decision, as in Table",
"4. Figure 1 shows that the toxicity ratio (toxic comments over total) of CAT-SMALL is higher when annotators are given context ( GC ), compared to when no context is provided ( GN ).",
"A one-sided Wilcoxon-Mann-Whitney test shows this is a statistically significant increase.",
"This is a first indication that providing context to annotators affects their decisions.",
"The toxicity ratio increases by 2 percentage points (4.4% to 6.4%) when context is provided, but this is an aggregated result, possibly hiding the true size of the effect of context.",
"The perceived toxicity of some comments may be increasing when context is provided, but for other comments it may be decreasing, and these effects may be partially cancelling each other when measuring the change in toxicity ratio.",
"To get a more accurate picture of the effect of 5 We experimented with providing the GC annotators with all the parent comments in the discussion.",
"We also experimented with preselection strategies, such as employing the score from a pre-trained toxicity classifier for a stratified selection and using a list of terms related to minority groups.",
"context, we measured the number of comments of CAT-SMALL for which the (averaged and rounded) toxicity label was different between the two groups ( GN , GC ).",
"We found that the toxicity of 4 comments out of 250 (1.6%) decreased with context, while the toxicity of 9 comments (3.6%) increased.",
"Hence, perceived toxicity was affected for 13 comments (5.2% of comments).",
"While the small size of CAT-SMALL does not allow us to produce accurate estimates of the frequency of posts whose perceived toxicity changes with context, the experiments on CAT-SMALL indicate that context has a statistically significant effect on the perceived toxicity, and that context can both amplify or mitigate the perceived toxicity, thus making a first step to addressing our first research question ( RQ 1).",
"Nevertheless, larger annotated datasets need to be developed to estimate more accurately the frequency of context-sensitive posts in online conversations, and how often context amplifies or mitigates toxicity.",
"To investigate whether adding context can benefit toxicity detection classifiers, we could not use CATSMALL , because its 250 comments are too few to effectively train a classifier.",
"Thus, we proceeded with the development of a larger dataset.",
"Although the best approach would be to extend CAT-SMALL , which had two mutually exclusive groups of annotators labeling each comment, we found that the annotation process was very slow in that case, largely because of the small size of annotator groups we had access to in Figure Eight (19 and 23 for GC and GN respectively).",
"6 By contrast, when we did not request mutually exclusive annotator groups, we could get many more workers (196 and 286 for GC and GN respectively) and thus annotation became significantly faster.",
"For this larger dataset, dubbed CAT-LARGE , we annotated 20k randomly selected comments from Wikipedia Talk Pages.",
"10k comments were annotated by human workers who only had access to the comment in question (group with no context, GN ).",
"The other 10k comments were annotated by providing the annotators also with the parent comment and the title of the discussion (group with context, GC ).",
"Each comment was annotated by three workers.",
"We selected comments of length from 10 and 400 characters, with depth in thread from 2 (direct 6 Figure Eight provided us with the two mutually exclusive annotator groups, which could not grow in size.",
"reply) to",
"5. Inter-annotator agreement was computed with Krippendorff's alpha on 123 texts, and it was found to be 0.72% for GN and 0.70% for GC .",
"Figure 2 shows that the toxicity ratio increased (from 0.6% to 1.5%) when context was given to the annotators.",
"A one-sided Wilcoxon-Mann-Whitney test shows this is a statistically significant increase ( P < . 001 ).",
"Again, the change of toxicity ratio is an indication that context does affect the perceived toxicity, but it does not accurately show how many comments are affected by context, since the perceived toxicity may increase for some comments when context is given, and decrease for others.",
"Unlike CAT-SMALL , in CAT-LARGE we cannot count for how many comments the perceived toxicity increased or decreased with context, because the two groups of annotators ( GN , GC ) did not annotate the same comments.",
"The toxicity ratios of CAT-LARGE (Fig. 2) are lower than in CAT-SMALL (Fig. 1), though they both show a trend of increased toxicity ratio when context is provided.",
"The toxicity ratios of CAT-LARGE are more reliable estimates of toxicity in online conversations, since they are based on a much larger dataset.",
"We used CAT-LARGE to experiment with both context-insensitive and context-sensitive toxicity classifiers.",
"The former only consider the post being rated (the target comment), whereas the latter also consider the context (parent comment).",
"BILSTM Our first context-insensitive classifier is a bidirectional LSTM (Hochreiter and Schmidhu-ber, 1997).",
"On top of the concatenated last states (from the two directions) of the BILSTM , we add a feed-forward neural network ( FFNN ), consisting of a hidden dense layer with 128 neurons and tanh activations, then a dense layer leading to a single output neuron with a sigmoid that produces the toxicity probability.",
"We fix the bias term of the single output neuron to log TN , where T and N are the numbers of toxic and non-toxic training comments, respectively, to counter-bias against the majority (non-toxic) class.",
"7 This BILSTM -based model could, of course, be made more complex (e.g., by stacking more BILSTM layers, and including self-attention), but it is used here mainly to measure how much a relatively simple (by today's standards) classifier benefits when a context mechanism is added (see below).",
"BERT At the other end of complexity, our second context-insensitive classifier is BERT (Devlin et al., 2019), fine-tuned on the training subset of each experiment, with a task-specific classifier on top, fed with BERT 's top-level embedding of the [ CLS ] token.",
"We use BERT-BASE pre-trained on cased data, with 12 layers and 768 hidden units.",
"We only unfreeze the top three layers during fine-tuning, with a small learning rate (2e-05) to avoid catastrophic forgetting.",
"The task-specific classifier is the same FFNN as in the BILSTM classifier.",
"BERT-CCTK We also experimented with a BERT model that is the same as the previous one, but fine-tuned on a sample (first 100k comments) of the CCTK dataset (Table 3).",
"We used the general toxicity labels of that dataset, and fine-tuned for a single epoch.",
"The only difference of this model, compared to the previous one, is that it is fine-tuned on a much larger training set, which is available, however, only without context (no parent comments).",
"The annotators of the dataset were also not provided with context (Table 3).",
"PERSPECTIVE The third context-insensitive classifier is a CNN -based model for toxicity detection, trained on millions of user comments from online publishers.",
"It is publicly available through the PERSPECTIVE API .",
"8 The publicly available form of this model cannot be retrained, fine-tuned, or modified to include a context-awareness component.",
"Like BERT-CCTK , this model uses an external (but now much larger) labeled training set.",
"This training set is not publicly available, it does not include context, and was labeled by annotators who were not provided with context.",
"CA-BILSTM-BILSTM In a context-aware extension of the context-insensitive BILSTM classifier, dubbed CA-BILSTM-BILSTM , we added a second BILSTM to encode the parent comment (Fig. 3).",
"The vector representations of the two comments (last states from the two directions of both BILSTM s) are concatenated and passed to a FFNN , which is otherwise identical to the FFNN of the context-insensitive BILSTM .",
"CA-BILSTM-BERT We also used a BILSTM to encode the parent in a context-aware extension of the BERT -based classifier, called CA-BILSTM-BERT (Fig. 4).",
"Now BERT encodes the target comment, whereas a BILSTM (the same as in CA-BILSTMBILSTM ) encodes the parent.",
"(We could not use two BERT instances to encode both the parent and the target comment, because the resulting model did not fit in our GPU .)",
"The concatenated representations of the two comments are passed to a FFNN , which is otherwise the same as as in previous models.",
"BERT is fine-tuned on the training subset, as before, and the BILSTM encoder of the parent is jointly trained (with a larger learning rate).",
"CA-SEP-BERT We also experimented with another context-aware version of the BERT -based classifier, dubbed CA-SEP-BERT .",
"This model concatenates the text of the parent and target comments, separated by BERT 's [ SEP ] token, as in BERT 's next sentence prediction pre-training task (Fig. 5).",
"Unlike CA-BILSTM-BERT , it does not use a separate encoder for the parent comment.",
"The model is again fine-tuned on the training subset.",
"CA-CONC-BERT-CCTK , CA-CONC-PERSPECTIVE These are exactly the same as BERT-CCTK and PERSPECTIVE , respectively, trained on the same data as before (no con-text), but at test time they are fed with the concatenation of the text of the parent and target comment, as a naive context-awareness mechanism.",
"Table 5 reports ROC AUC scores, averaged over a 5-fold Monte Carlo ( MC ) cross-validation, i.e., using 5 different random training/development/test splits (Gorman and Bedrick, 2019); we also report the standard error of mean over the folds.",
"The models are trained on the training subset(s) of CAT-LARGEN (@ N models) or CAT-LARGE-C (@ C models), i.e., they are trained on comments with gold labels obtained without or with context shown to the annotators, respectively.",
"All models are always evaluated (in each fold) on the test subset(s) of CAT-LARGE-C , i.e., with gold labels obtained with context shown to annotators, assuming that those labels are more reliable (the annotators had a broader view of the discussion).",
"In each fold (split) of the MC cross-validation, the training, development, and test subsets are 60%, 20%, and 20% of the data, respectively, preserving in each subset the toxicity ratio of the entire dataset.",
"We always use the test (and development) subsets of CAT-LARGE-C , as always noted.",
"We report ROC AUC , because both datasets are heavily unbalanced, with toxic comments being rare (Fig. 2).",
"9 A first observation from Table 5 is that the best results are those of PERSPECTIVE , BERT-CCTK , and their context-aware variants (last four rows).",
"9 Recall that we also fix the bias term of the output neuron of each model (apart from PERSPECTIVE ) to log TN , to bias against the majority class.",
"We also tried under-sampling to address class imbalance, but this technique worked best.",
"This is not surprising, since these systems were trained (fine-tuned in the case of BERT-CCTK ) on much larger toxicity datasets than the other systems (upper two zones of Table 5), and BERT-CCTK was also pre-trained on even larger corpora.",
"What is more surprising is that any kind of information about the context does not lead to any consistent (or large) improvement in system performance .",
"PERSPECTIVE and BERT-CCTK seem to improve slightly with the naive context-awareness mechanism of concatenating the parent and target text during testing, but the improvement is very small and we did not detect a statistically significant difference.",
"10 Training with gold labels obtained from annotators that had access to context (@ C models) also leads to no consistent (or large) gain, compared to training with gold labels obtained out of context (@ N models).",
"This is probably due to the fact that context-sensitive comments are few (5.2% in the experiments on CAT-SMALL ) and, hence, any noise introduced by using gold labels obtained out of context does not significantly affect the performance of the models.",
"There was also no consistent (or large) improvement when encoding the parent comments with a BILSTM ( CA-BILSTM-BILSTM , CA-BILSTM-BERT ) or directly as in BERT 's next sentence prediction pre-training task ( CA-SEP-BERT ).",
"This is again probably a consequence of the fact that context-sensitive comments are few.",
"The small number of context-sensitive comments does not allow the BILSTMand BERT -based classifiers to learn how to use the context encodings to cope with 10 We used single-tailed stratified shuffling (Dror et al., 2018; Smucker et al., 2007), P < 0 .",
"01 , 10,000 repetitions, 50% swaps in each repetition.",
"context-sensitive comments, and failing to cope with context-sensitive comments does not matter much during testing, again since context-sensitive comments are so few.",
"We conclude for our second research question ( RQ 2) that we found no evidence that context actually improves the performance of toxicity classifiers, having tried both simple ( BILSTM ) and more powerful classifiers ( BERT ), having experimented with several methods to make the classifiers context aware, and having also considered the effect of gold labels obtained out of context vs. gold labels obtained by showing context to annotators.",
"We investigated the role of context in detecting toxicity in online comments.",
"We collected and share two datasets for investigating our research questions around the effect of context on the annotation of toxic comments ( RQ 1) and its detection by automated systems ( RQ 2).",
"We showed that context does have a statistically significant effect on toxicity annotation, but this effect is seen in only a narrow slice ( 5 . 2% ) of the (first) dataset.",
"We also found no evidence that context actually improves the performance of toxicity classifiers, having tried both simple and more powerful classifiers, having experimented with several methods to make the classifiers context aware, and having also considered the effect of gold labels obtained out of context vs. gold labels obtained by showing context to the annotators.",
"The lack of improvement in system performance seems to be related to the fact that context-sensitive comments are infrequent, at least in the data we collected.",
"A limitation of our work is that we considered a narrow contextual context, comprising only the previous comment and the discussion title.",
"11 It would be interesting to investigate in future work ways to improve the annotation quality when more comments in the discussion thread are provided, and also if our findings hold when broader context is considered (e.g., all previous comments in the thread, or the topic of the thread as represented by a topic model).",
"Another limitation of our work is that we used randomly sampled comments.",
"The effect of context may be more significant in conversations about particular topics, or for particular conversational tones (e.g. sarcasm), or when they reference communities that are frequently the target of online abuse.",
"Our experiments and datasets provide an initial foundation to investigate these important directions.",
"We thank the anonymous reviewers for their comments.",
"This research was funded in part by Google."
] | [
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"objective",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"method",
"method",
"objective",
"method",
"abstain",
"objective",
"other",
"other"
] |
[
"Approaches to computational argumentation tasks such as stance detection and aspect detection have largely focused on the text of individual claims, losing out on potentially valuable context from the broader collection of text.",
"We present a general approach to these tasks motivated by syntopical reading, a reading process that emphasizes comparing and contrasting viewpoints in order to improve topic understanding.",
"To capture collection-level context, we introduce the syntopical graph , a data structure for linking claims within a collection.",
"A syntopical graph is a typed multi-graph where nodes represent claims and edges represent different possible pairwise relationships, such as entailment, paraphrase, or support.",
"Experiments applying syntopical graphs to stance detection and aspect detection demonstrate state-of-the-art performance in each domain, significantly outperforming approaches that do not utilize collection-level information.",
"Collections of text about the same topic such as news articles and research reports often present a variety of viewpoints.",
"Adler and Van Doren (1940) proposed a formalized manual process for understanding a topic based on multiple viewpoints in their book, How to Read a Book , applying dialectics to collection browsing.",
"This process consists of four levels of reading, the highest of which is syntopical reading .",
"Syntopical reading is focused on understanding a core concept by reading a collection of works.",
"It requires finding passages on the (cid:63) Work done while interning at Adobe Research.",
"core concept that agree or disagree with each other, defining the issues, and analyzing the discussion to gain a better understanding of the core concept.",
"The goal of the paper at hand is to operationalize the syntopical reading process computationally in order to help individuals make sense of a collection of documents for a given topic.",
"Viewed through the lens of computational argumentation, these documents state claims or conclusions that can be grouped by the aspects of the topic they discuss as well as by the stance they convey towards the topic (Stede and Schneider, 2018).",
"An individual aiming to form a thorough understanding of the topic needs to get an overview of these viewpoints and their interactions.",
"This may be hard even if adequate tool support for browsing the collection is available (Wachsmuth et al., 2017a; Stab et al., 2018; Chen et al., 2019).",
"We seek to enable systems that are capable of reconstructing viewpoints within a collection, where a viewpoint is expressed as a triple V = ( topic , aspect , stance ) .",
"We consider the argumentative unit of a claim to be the minimal expression of a viewpoint in natural language, such that a single viewpoint can have many claims expressing it.",
"As an example, consider the following two claims: Nuclear energy emits zero CO 2 . Nuclear can provide a clean baseload, eliminating the need for fracking and coal mining.",
"Within a collection these claims express: V = ( Nuclear Energy , env. impact , PRO ) The goal of the systems we envision is thus to identify, group, and summarize the latent view-Viewpoints ( topic , aspect , stance ) Syntopical Graph Construction Pairwise judgements are used to as edges in a typed multigraph , where claims and documents are the nodes.",
"points underlying the claims in a collection, such that a reader can investigate and engage with them.",
"Many existing approaches attempt to identify viewpoints within a collection largely from the text of individual claims only, which we refer to as content-only approaches.",
"However, as the latent viewpoints are a global property of a collection, it is necessary to account not only for the text but also its context.",
"For instance, in order to identify the stance of a claim with respect to a topic, it may help to consider the claim's stance relative to other claims on the topic.",
"Although a few researchers have accounted for connections between claims and other information (details in Section 2), no systematic model of their interactions exists yet.",
"We therefore introduce a syntopical graph that models pairwise textual relationships between claims in order to enable a better reconstruction of the latent viewpoints in a collection.",
"In line with the idea of Adler and Van Doren (1940), the syntopical graph makes the points of agreement and disagreement within the collection explicit.",
"Technically, it denotes a multi-graph (where a pair of nodes can have many typed edges) that simultaneously represents relationships such as relative stance, relative specificity, or whether a claim paraphrases another.",
"We build syntopical graphs by transferring pretrained pairwise models, requiring no additional training data to be annotated.",
"We decompose the problem of viewpoint reconstruction into the subtasks of stance detection and aspect detection , and evaluate the benefits of syntopical graphs which are a collection-level approach on both tasks.",
"For stance detection, we use the sentential argumentation mining collection (Stab et al., 2018) and the IBM claim stance dataset (Bar-Haim et al., 2017a).",
"For aspect detection we use the argument frames collection (Ajjour et al., 2019).",
"We treat the graph as an input to:",
"(a) a graph neural network architecture for stance detection, and",
"(b) graph algorithms for unsupervised tasks such as aspect clustering.",
"In both settings, our results show that the syntopical graph approach improves significantly over content-only baselines.",
"The contributions of the work are two-fold: 1. A well-motivated data structure for capturing the latent structure of an argumentative corpus, the syntopical graph.",
"2. An instantiation of syntopical graphs that yields state-of-the-art results on stance detection and aspect detection.",
"First attempts at stance detection used content-oriented features (Somasundaran and Wiebe, 2009).",
"Later approaches, such as those by Ranade et al. (2013) and Hasan and Ng (2013), exploited common patterns in dialogic structure to improve stance detection.",
"More tailored to argumentation, Bar-Haim et al. (2017a) first identified the aspects of a discussed topic in two related claims and the sentiment towards these aspects.",
"From this information, they derived stance based on the contrastiveness of the aspects.",
"Later, Bar-Haim et al. (2017b) modeled the context of a claim to account for cases without sentiment.",
"Our work follows up on and generalizes this idea, systematically incorporating implicit and explicit structure induced by the topics, aspects, claims, and participants in a debate.",
"In a similar vein, Li et al. (2018) embedded debate posts and authors jointly based on their interactions, in order to classify a post's stance towards the debate topic.",
"Durmus et al. (2019) encoded related pairs of claims using BERT to predict the stance and specificity of any claim in a complex structure of online debates.",
"However, neither of these exploited the full graph structure resulting from all the relations and interactions in a debate, which is the gap we fill in this paper.",
"Sridhar et al. (2015) model collective information about debate posts, authors, and their agreement and disagreement using probabilistic soft logic.",
"Whereas they are restricted to the structure available in a forum, our approach can in principle be applied to arbitrary collections of text.",
"We also tackle aspect detection, which may at first seem more content-oriented in nature.",
"Accordingly, previous research such as the works of Misra et al. (2015) and Reimers et al. (2019b) employed word-based features or contextualized word embeddings for topic-specific aspect clustering.",
"Ajjour et al. (2019), whose argument frames dataset we use, instead clustered aspects with Latent Semantic Analysis (LSA) and topic modeling.",
"But, in general, aspects might not be mentioned in a text explicitly.",
"Therefore, we follow these other approaches, treating the task as a clustering problem.",
"Unlike them, however, we do not model only the content and linguistic structure of texts, but we combine them with the debate structure.",
"Different types of argumentation graphs have been proposed, covering expert-stance information (Toledo-Ronen et al., 2016), basic argument and debate structure (Peldszus and Stede, 2015; Gemechu and Reed, 2019), specific effect relations (Al-Khatib et al., 2020; Kobbe et al., 2020), social media graphs (Aldayel and Magdy, 2019), and knowledge graphs (Zhang et al., 2020).",
"Our main focus is not learning to construct ground-truth graphs, but how to use an approximated graph to derive properties such as stance and aspect.",
"Our work resembles approaches that derive the relevance of arguments (Wachsmuth et al., 2017b) or their centrality and divisiveness in a discussion (Lawrence and Reed, 2017) from respective graphs.",
"Sawhney et al. (2020) used a neural graph attention network to classify speech stance based on a graph with texts, speakers, and topics as nodes.",
"While we also use a relational graph convolutional network for learning, the graph we propose captures implicit claim relations as well as explicit structure.",
"In addition, text-based graph neural models have been proposed to facilitate classification, such as TextGCN (Yao et al., 2019) as well as the followup work BertGCN (Lin et al., 2021).",
"These approaches build a graph over terms (using normalized mutual information for edge weights) as well as sentences and documents (using TF-IDF for edge weights) to improve sentenceor document-level classification.",
"Our work generalizes this approach, focusing on incorporating many edge types with different meanings, such as relative stance or relative specificity.",
"We compare our approach with a BertGCN baseline, and we ablate all considered edge types, in order to show the importance of capturing these different textual relationships.",
"Ultimately, we seek to facilitate understanding of the main viewpoints in a text collection.",
"Qiu and Jiang (2013) used clustering-based viewpoint discovery to study the impact of the interaction of topics and users in forum discussions.",
"Egan et al. (2016) used multi-document summarization techniques to mine and organize the main points in a debate, and Vilares and He (2017) mined the main topics and their aspects using a Bayesian model.",
"Bar-Haim et al. (2020) introduced the idea of key-point analysis, grouping arguments found in a collection by the viewpoint they reflect and summarizing each group to a salient keypoint.",
"While our graph-based analysis is likely to be suitable for finding keypoints, we instead focus on reconstructing latent viewpoints by grouping claims, leaving open the option to identify the key claims in future work as it would require manual evaluation.",
"We now introduce the concept of a syntopical graph .",
"The goal of our syntopical graph is to systematically model the salient interactions of all claims in a collection of documents.",
"Then, properties of claims (say, their stance towards a topic or the aspects they cover) can be assessed based not only on the content of the claim alone, but on the entirety of information available in their context.",
"To capture this context, we build a graph where documents and claims are nodes.",
"Edges between Claim: Nuclear energy emits zero CO2.",
"claims are constructed using pairwise scoring functions, such as pretrained natural language inference (NLI) models.",
"Claims may relate to each other in many different ways: they can support or refute each other, they can paraphrase each other, they can entail or contradict each other, they can be topically similar, etc.",
"We hypothesize that being able to account for these relationships helps computational argumentation tasks such as stance detection.",
"Intuitively, if it is known that claim",
"(a) refutes claim",
"(b), and claim",
"(b) has a positive stance to the topic, it seems more reasonable to believe that claim",
"(a) has a negative stance.",
"We can represent all of this with a graph if we allow multiple edges between nodes.",
"For instance, claims can have edges that label both relative agreement and relative specificity, as exemplified in the graph in Figure 2. The process of constructing a graph is shown in Figure 1. Technically, we capture this intuition as a typed multi-graph: typed in that the nodes have different types drawn from { document, claim } , and a multi-graph because multiple edges (of different types) are allowed between nodes.",
"We then formally define a syntopical graph as a labeled multigraph in terms of a 5-tuple G : G = ( N , E , N, E, l N , l E ) , where N is the alphabet of node types, E is the alphabet of edge types, N is the set of nodes, E is the set of multi-edges, l N : N N maps each node to its type, and l E : E E maps each edge to its type.",
"In the following, we show how to construct the graph and what each of its components look like.",
"E = E : claim E : document , where E : claim is the set of types of claim-claim edges and E : document is the set of types of claim-document edges.",
"Claim Nodes The central node type in a syntopical graph is a claim node.",
"A claim node represents a topically relevant claim in a collection.",
"By treating a claim as a node embedded in a graph, we can take advantage of rich graph structures to represent the context in which the claim occurs, such as the document the claim appears in or the claim's relationship with other claims.",
"Document Nodes In general, two claims from the same source are more likely to represent the same viewpoint than a pair of claims sampled randomly.",
"To capture this intuition, we allow claims from the same source to share information with each other via document nodes, which enables models to pool information about groups of claims and share the information amongst them.",
"Similar information about claims can be aggregated in the metadata node and broadcast out to all claims.",
"support each other, is one more specific than the other, etc.",
"Different tasks can make use of this information (e.g., a claim is likely to have a specific stance if other claims that support it have the same stance).",
"claim-document edges ( E : document ) allow groups of claims to share information with each other through common ancestors (e.g., claims in a document pro nuclear energy are somewhat likely to have a pro stance).",
"Any pair of nodes can have multiple edges of different types between them; a claim can both contradict and refute another claim, for instance.",
"Edge Weights An edge can have a real-valued weight associated with it on the range ( 1 , 1) , representing the strength of the connection.",
"The relative stance edge between a claim which strongly refutes another would receive a weight close to 1 .",
"For graph edges, we combine four pretrained models and two similarity measures.",
"The pretrained edge types are: relative stance and relative specificity from Durmus et al. (2019), paraphrase edges from Dolan et al. (2004); Morris et al. (2020), and natural language inference edges from Williams et al. (2018); Liu et al. (2019).",
"The edge weights are the confidence scores defined by weight ( u, v, r ) = p pos ( u,v ) p neg ( u,v ) , where u and v are claims, r is the relation type, and p pos ( u,v ) is the probability of a positive association between the claims (e.g., is a paraphrase or does entail), p neg ( u,v ) for a negative one.",
"For similarity-based edges, we use standard TF-IDF for term-based similarity and LDA for topic-based similarity (Blei et al., 2003), using cosine similarity as the edge weight.",
"The document-claim edges have a single type, contains , with an edge weight of 1. We compute each of the pairwise relationships for all pairs of claims that share the same topic, and then filter out edges using a threshold on the absolute value of the edge weight.",
"is tuned as a hyperparameter on a validation dataset for each task.",
"For node representations, we initialize the claim node representations with the output of a natural language inference model that predicts whether the claim entails the topic.",
"We initialize the document representations with a sentence vectorizer over the text of the document.",
"A viewpoint can be understood as a judgment of some aspect of a topic that conveys a stance towards the topic.",
"The goal of viewpoint reconstruction is to identify the set of viewpoints in a collection given a topic, starting with the claims.",
"An example of this process is shown on the right in Figure 1. To denote viewpoints, we borrow notation in line with the idea of aspect-based argument mining (Traut-mann, 2020), which in turn was inspired by aspect-based sentiment analysis.",
"In particular, we express a viewpoint as a triple V : V = ( topic , aspect , stance ) A claim is an expression of a viewpoint in natural language, and a single viewpoint can be expressed in several ways throughout a collection in many claims.",
"Aspects are facets of the broader argument around the topic.",
"While some actual claims may encode multiple viewpoints simultaneously, henceforth we consider each claim to encode one viewpoint for simplicity.",
"To tackle viewpoint reconstruction computationally, we decompose it into two sub-tasks, stance detection and aspect detection, along with a final grouping of claims with same aspect and stance.",
"Stance Detection Stance detection requires assigning a valence label to a claim with respect to a particular topic.",
"Though content-only baselines can work in many cases, there are also cases where the stance of a claim might only make sense in relation to a broader argument.",
"For example, the claim Nuclear power plants take 5 years to construct is difficult to assign a stance a priori.",
"However, in the context of other claims such as Solar farms often take less than 2 years to commission, it might be viewed as having a negative stance.",
"To exploit this additional contextual information, we use syntopical graphs as input to a graph neural network, in particular a Relational Graph Convolutional Network (R-GCN) (Schlichtkrull et al., 2018).",
"We treat stance detection as a supervised node classification task.",
"The goal is to output a prediction in the set { PRO , CON } for each claim node relative to a topic.",
"R-GCNs were developed to perform node classification and edge prediction for knowledge bases, which are also typed multi-graphs.",
"As such, the abstractions of the syntopical graph slot neatly into the abstractions of R-GCNs.",
"The input to an R-GCN is a weighted, typed multigraph with some initial node representation.",
"The network is made up of stacked relational graph convolutional layers; each layer computes a new set of node representations based on each node's neighborhood.",
"In effect, each layer combines the edge-type-specific representation of all of a node's neighbors with its own representation.",
"The representations are influenced by the node, and all of its neighbors, attenuated through the edge weight.",
"An R-GCN thus consumes a set of initial claim representations, transforms them through stacks of relational graph convolutional layers, and outputs a final set of node vectors, which are fed into a classifier to predict the claim stance.",
"Aspect Detection Following the work of Ajjour et al. (2019), we treat aspect detection as an unsupervised task.",
"As aspects are an open class, we use a community detection approach, modularity-based community detection (Clauset et al., 2004).",
"The key intuition of modularity-based community detection is that communities are graph partitions that have more edges within communities than across communities.",
"Modularity is a value assigned to a graph partition, which is higher when there are fewer edges across communities than within them; a modularity of 0 represents a random partition, while higher modularities indicate tighter communities.",
"The goal of modularity-based community detection is to maximize modularity by finding dense partitions.",
"This intuition works well for aspects in a syntopical graph claims that discuss a similar aspect are likely to have salient interactions.",
"As aspects themselves are independent of stance, the direction of the interactions (e.g., support or refute) does not matter, but their salience does.",
"To capture only the intensity of the interaction between two claims, we apply a transformation to signed collapse the multi-edges of a syntopical graph (de-noted SG ) to a positive-weighted graph ( G ): w G ( u, v ) = (cid:80) t E SG ( u, v, t ) | w SG ( u, v, t ) | (cid:80) t E SG ( u, v, t ) , where w G ( u, v ) is the weight between nodes u and v in the new graph G , SG ( u, v, t ) = 1 if an edge of type t exists between nodes u and v in the syntopical graph ( SG ), and w SG ( u, v, t ) is the edge weight for type t between nodes u and v in the syntopical graph.",
"This is equivalent to taking the average across types of the absolute values of the weights.",
"The newly constructed single-edge graph is then used to identify aspects, which should have more interactions between them than across them.",
"To evaluate the effectiveness of our approach at reconstructing viewpoints, we consider three datasets across the two subtasks of stance and aspect detection.",
"We hypothesize that syntopical graph approaches will outperform content-only baselines including the ones used to initialize the claim representations because they are able to make use of not only the claim content, but also the claim context.",
"We further hypothesize that syntopical graph approaches will outperform graph-based baselines that use only textual similarity edges, because the latter's claim context is not as rich.",
"For our experiments, we construct a syntopical graph as described in Section 3.",
"We further evaluate our model by conducting several additional experiments, including removing the use of document nodes or initial claim representations, analyzing the performance of each edge type in isolation and when left out, and an analysis of the differences in predictions between the syntopical graph and the content-only baselines.",
"Stance Detection For the stance detection experiments, we use two datasets: first, the heterogeneous cross-topic argumentation mining dataset (ArgMin) from Stab et al. (2018), and second, the claim-stance dataset (IBMCS) from Bar-Haim et al. (2017a).",
"The ArgMin dataset contains about 25k sentences from 400 documents across eight controversial topics, ranging from abortion to school uniforms.",
"Following Schiller et al. (2020), we filter only the claims, resulting in 11.1k claims.",
"The IBMCS dataset contains 2.4k claims across 55 topics.",
"We use the splits from Schiller et al. (2020), which ensure that the topics in the training and test sets are mutually exclusive.",
"Claims are given a stance label drawn from { PRO , CON } .",
"We evaluate using macro-averaged F 1 and accuracy.",
"We use a syntopical graph for each dataset as the input to a relational graph convolutional network (R-GCN), implemented in DGL (Wang et al., 2019) and PyTorch (Paszke et al., 2019).",
"For document node representations, we use a pretrained sentence transformer and concatenate all of the sentences as input (Reimers et al., 2019a).",
"For the claim node representations, we use a RoBERTa model pretrained on an NLI task (Liu et al., 2019) to encode both the claim and topic; the resulting vectors are fixed throughout training.",
"Aspect Detection For clustering-based aspect detection, we use the argument frames dataset from Ajjour et al. (2019).",
"The dataset contains roughly 11k sentences drawn from 465 different topics.",
"Each sentence has a specific aspect (or frame, in the original paper), drawn from a set of over a thousand possible aspects.",
"Following the authors, we evaluate with a clustering metric, b-cubed F 1 (Amigo et al., 2009).",
"We transform the graph as described in Section 4 to use as an input to modularity-based community detection, using of 0.6 tuned on held-out topics.",
"The main results for stance detection are shown in Table 1. The most important finding is that the fusion of signals from content and from structure done by our approach syntopical graph (R-GCN) outperforms the existing state-of-the-art (Schiller et al., 2020) for both the IBMCS dataset (83.40 macro F 1 , +5.68 absolute) and the ArgMin dataset (67.7 macro F 1 , +6.12 absolute).",
"The content-oriented RoBERTa Large NLI model and the structure-only syntopical graph have significantly reduced performance independently, emphasizing the complementarity of the two signals.",
"Our best network is the one which includes both claim and document node, except for the ArgMin dataset.",
"Aspect detection results are shown in Table 2. Our modularity approach outperforms the state-of-the-art (Ajjour et al., 2019) on the argument frames dataset (55.42 b-cubed F 1 , +8.41 absolute).",
"The remainder of this section investigates the robustness of the syntopical graph approach to stance and aspect detection: First, we analyze the contribution of each edge type, running experiments without and with only each edge type.",
"We also examine the accuracy of the edges in our graph when applied out of domain as well as analysis to understand the types of claims for which this model improves performance.",
"Edge Analysis We conducted an ablation study to analyze the usefulness of each considered edge type.",
"To do so, we built graphs containing each edge independently, and graphs dropping each edge independently.",
"Table 3 presents the results.",
"For the supervised task of stance detection, we use the IBMCS dataset.",
"No single edge performs as well as the combination of edges, the best being relative stance with a macroF 1 score of 80.72.",
"This indicates that our model is capable of taking advantage of the different kinds of relationships represented by the edge types.",
"We see the largest performance drops when we remove relative stance (79.39), relative specificity (79.39), or NLI (78.95) edges respectively, indicating the highest amount of unique information being captured by these edges.",
"In contrast, paraphrase can be removed without loss for stance detection according to the results.",
"This is opposite for aspect detection, which we treat as an unsupervised community detection task; here paraphrase alone outperforms the graph with all edge relationships (macro F 1 56.31 versus 55.42).",
"The other edges even have a slight negative effect on the overall results (55.42); being unsupervised, our approach here has no way of filtering out uninformative edges.",
"Edge Domain Transfer One possible confounder of the contribution of each edge type is the out-of-domain performance of the pairwise model used to predict that edge.",
"A poor model would provide little more than random noise, even if the edge type were expected to be helpful.",
"To investigate this possibility, we sampled 100 each of the edges (above = 0 . 6 ) with the highest weight, the lowest weight, and a random sample.",
"We then annotated each edge as being correctly or incorrectly predicted.",
"Results are shown in Table 4. There is a clear trend that the edge weight is correlated with edge correctness, meaning that the models retain some level of calibration across domains.",
"As we incorporate the edge weight in the R-GCN, this helps to lessen the effect of the noisier, weaker edges.",
"Another trend is that an edge type's usefulness across tasks is not solely a function of that edge type's accuracy.",
"The type of failure mode is also important.",
"For instance, the relative stance edges have poor surface-level accuracy, but the most common failure was not predicting the wrong relative stance; it was predicting any stance for pairs of claims about different aspects.",
"Flip Analysis Finally, we analyze flipped cases in stance detection in which the baseline predicted stance incorrectly but the model predicted stance correctly, or vice-versa, to understand areas for which this model improves performance.",
"A sample of these is shown in Table 5. Perhaps the most surprising result is how different the predictions of the syntopical graph-based approach are from those of the content-only MT-DNN baseline.",
"For the IBMCS dataset, there were 1355 claims in the test set, and we flipped 219 (16.2%) correctly relative to the MT-DNN baseline, but also 140 (10.3%) incorrectly compared to that baseline.",
"Thus, we flipped 26.5% of the overall predictions for the 5.68 point improvement in F 1 .",
"This holds across the ArgMin dataset as well, where we flipped 536 (19.6%) claims correctly and 373 (13.7%) claims incorrectly, out of a total 2726 claims in the test set.",
"Though we show substantial gains overall, it seems that the models capture different signals.",
"We thus believe that future improvements through improved model combination may still be possible.",
"In this paper, we have introduced a data structure, the syntopical graph , which provides context for claims in collections.",
"We have provided empirical evidence that syntopical graphs can be used as input representations for graph-structured approaches Example True MT-DNN Syn.",
"(such as graph neural networks and graph clustering algorithms) to obtain significant improvements over content-only baselines.",
"We believe there are several opportunities to extend this work in the future.",
"First, we believe the graph construction could be improved by avoiding the inefficient pairwise analysis, expanding the edge types, and utilizing a more robust classifier for the graph.",
"Second, we would relax the constraint that a claim represents a single viewpoint, or the limitation of aspect detection to unsupervised approaches.",
"Finally, we would like to apply our approach to the original problem first motivated by syntopical reading to see if this system can aid users in browsing or understanding a collection.",
"We anticipate that the syntopical graph explored in this work will have a beneficial impact in real world systems to aid users in improved comprehension and reduce susceptibility to misinformation.",
"The goal of our work is motivated by syntopical reading, which theorizes that individuals exposed to agreement and disagreement within a collection gain a deeper understanding of the central topics.",
"Our work on syntopical graphs provides an algorithmic foundation to aid readers in understanding the key viewpoints (aspect and stance for a given topic) present in a collection.",
"We would like to thank many others for their invaluable feedback and patient discussions, including Charlotte Ellison, Ani Nenkova, Tong Sun, Han-Chin Shing, and Pedro Rodriguez.",
"This work was generously supported through Adobe Gift Funding, which supports an Adobe Research-University of Maryland collaboration.",
"It was completed while the primary author was interning at Adobe Research."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"method",
"objective",
"other",
"method",
"other",
"method",
"method",
"other",
"method",
"abstain",
"other",
"objective",
"other",
"other",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"method",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"Amr Sharaf University of Maryland [email protected]",
"Abstract",
"Imitation learning algorithms provide state-of-the-art results on many structured prediction tasks by learning near-optimal search policies.",
"Such algorithms assume training-time access to an expert that can provide the optimal action at any queried state; unfortunately, the number of such queries is often prohibitive, frequently rendering these approaches impractical.",
"To combat this query complexity, we consider an active learning setting in which the learning algorithm has additional access to a much cheaper noisy heuristic that provides noisy guidance.",
"Our algorithm, LEAQI, learns a difference classifier that predicts when the expert is likely to disagree with the heuristic, and queries the expert only when necessary.",
"We apply LEAQI to three sequence labeling tasks, demonstrating significantly fewer queries to the expert and comparable (or better) accuracies over a passive approach.",
"Structured prediction methods learn models to map inputs to complex outputs with internal dependencies, typically requiring a substantial amount of expert-labeled data.",
"To minimize annotation cost, we focus on a setting in which an expert provides labels for pieces of the input, rather than the complete input (e.g., labeling at the level of words, not sentences).",
"A natural starting point for this is imitation learning-based learning to search approaches to structured prediction (Daum et al., 2009; Ross et al., 2011; Bengio et al., 2015; Leblond et al., 2018).",
"In imitation learning, training proceeds by incrementally producing structured outputs on piece at a time and, at every step, asking the expert what would you do here? and learning to mimic that choice.",
"This interactive model comes at a substantial cost: the expert demonstrator must be continuously available and must be able to answer a potentially large number of queries.",
"We reduce this annotation cost by only asking an expert for labels that are truly needed; our algorithm, Learning to Query for Imitation (LEAQI, / \"li:,tSi: /) 1 achieves this by capitalizing on two factors.",
"First, as is typical in active learning (see 2), LEAQI only asks the expert for a label when it is uncertain.",
"Second, LEAQI assumes access to a noisy heuristic labeling function (for instance, a rule-based model, dictionary, or inexpert annotator) that can provide low-quality labels.",
"LEAQI operates by always asking this heuristic for a label, and only querying the expert when it thinks the expert is likely to disagree with this label.",
"It trains, simultaneously, a difference classifier (Zhang and Chaudhuri, 2015) that predicts disagreements between the expert and the heuristic (see Figure 1).",
"The challenge in learning the difference classifier is that it must learn based on one-sided feedback: if it predicts that the expert is likely to agree with the heuristic, the expert is not queried and the classifier cannot learn that it was wrong.",
"We address this one-sided feedback problem using the Apple Tasting framework (Helmbold et al., 2000), in which errors (in predicting which apples are tasty) are only observed when a query is made (an apple is tasted).",
"Learning in this way particularly important in the general case where the heuristic is likely not just to have high variance with respect to the expert, but is also statistically biased.",
"Experimentally (4.5), we consider three structured prediction settings, each using a different type of heuristic feedback.",
"We apply LEAQI to: English named entity recognition where the heuristic is a rule-based recognizer using gazetteers (Khashabi et al., 2018); English scientific keyphrase extraction, where the heuristic is an unsupervised method (Florescu and Caragea, 2017); and Greek part-of-speech tagging, where the heuristic is a small dictio-1 Code is available at: https://github.com/xkianteb/leaqi After completing his Ph.D.",
"nary compiled from the training data (Zesch et al., 2008; Haghighi and Klein, 2006).",
"In all three settings, the expert is a simulated human annotator.",
"We train LEAQI on all three tasks using fixed BERT (Devlin et al., 2019) features, training only the final layer (because we are in the regime of small labeled data).",
"The goal in all three settings is to minimize the number of words the expert annotator must label.",
"In all settings, we're able to establish the efficacy of LEAQI, showing that it can indeed provide significant label savings over using the expert alone and over several baselines and ablations that establish the importance of both the difference classifier and the Apple Tasting paradigm.",
"We review first the use of imitation learning for structured prediction, then online active learning, and finally applications of active learning to structured prediction and imitation learning problems.",
"The learning to search approach to structured prediction casts the joint prediction problem of producing a complex output as a sequence of smaller classification problems (Ratnaparkhi, 1996; Collins and Roark, 2004; Daum et al., 2009).",
"For instance, in the named entity recognition example from Figure 1, an input sentence x is labeled one word at a time, left-to-right.",
"At the depicted state ( s 10 ), the model has labeled the first nine words and must next label the tenth word.",
"Learning to search approaches assume access to an oracle policy (cid:63) , which provides the optimal label at every position.",
"In (interactive) imitation learning, we aim to imitate the behavior of the expert policy, (cid:63) , which provides the true labels.",
"The learning to search view allows us to cast structured prediction as a (degenerate) imitation learning task, where states Algorithm 1 DAgger ( , N, (cid:104) i (cid:105) Ni =0 , (cid:63) ) 1: initialize dataset D = {} 2: initialize policy 1 to any policy in 3: for i = 1 . . . N do 4: (cid:46) stochastic mixture policy 5: Let i = i (cid:63) + (1 i ) i 6: Generate a T -step trajectory using i 7: Accumulate data D D { ( s, (cid:63) ( s )) } for all s in those trajectories 8: Train classifier i +1 on D 9: end for 10: return best (or random) i are (input, prefix) pairs, actions are operations on the output, and the horizon T is the length of the sequence.",
"States are denoted s S , actions are denoted a [ K ] , where [ K ] = { 1 , . . . , K } , and the policy class is denoted [ K ] S .",
"The goal in learning is to find a policy with small loss on the distribution of states that it, itself, visits.",
"A popular imitation learning algorithm, DAgger (Ross et al., 2011), is summarized in Alg 1. In each iteration, DAgger executes a mixture policy and, at each visited state, queries the expert's action.",
"This produces a classification example, where the input is the state and the label is the expert's action.",
"At the end of each iteration, the learned policy is updated by training it on the accumulation of all generated data so far.",
"DAgger is effective in practice and enjoys appealing theoretical properties; for instance, if the number of iterations N is O ( T 2 log(1 / )) then with probability at least 1 , the generalization error of the learned policy is O (1 /T ) (Ross et al., 2011, Theorem 4.2).",
"Active learning has been considered since at least the 1980s often under the name selective sam-pling",
"sam-pling (Rendell, 1986; Atlas et al., 1990).",
"In agnostic online active learning for classification, a learner operates in rounds (e.g. Balcan et al., 2006; Beygelzimer et al., 2009, 2010).",
"At each round, the learning algorithm is presented an example x and must predict a label; the learner must decide whether to query the true label.",
"An effective margin-based approach for online active learning is provided by Cesa-Bianchi et al. (2006) for linear models.",
"Their algorithm defines a sampling probability = b/ ( b + z ) , where z is the margin on the current example, and b > 0 is a hyperparameter that controls the aggressiveness of sampling.",
"With probability , the algorithm requests the label and performs a perceptron-style update.",
"Our approach is inspired by Zhang and Chaud-huri's (2015) setting, where two labelers are available: a free weak labeler and an expensive strong labeler.",
"Their algorithm minimizes queries to the strong labeler, by learning a difference classifier that predicts, for each example, whether the weak and strong labelers are likely to disagree.",
"Their algorithm trains this difference classifier using an example-weighting strategy to ensure that its Type II error is kept small, establishing statistical consistency, and bounding its sample complexity.",
"This type of learning from one-sided feedback falls in the general framework of partial-monitoring games , a framework for sequential decision making with imperfect feedback.",
"Apple Tasting is a type of partial-monitoring game (Little-stone and Warmuth, 1989), where, at each round, a learner is presented with an example x and must predict a label y { 1 , +1 } .",
"After this prediction, the true label is revealed only if the learner predicts +1 .",
"This framework has been applied in several settings, such as spam filtering and document classification with minority class distributions (Sculley, 2007).",
"Sculley (2007) also conducts a through comparison of two methods that can be used to address the one-side feedback problem: label-efficient online learning (Cesa-Bianchi et al., 2006) and margin-based learning (Vapnik, 1982).",
"In the context of structured prediction for natural language processing, active learning has been considered both for requesting full structured outputs (e.g. Thompson et al., 1999; Culotta and McCallum, 2005; Hachey et al., 2005) and for requesting only pieces of outputs (e.g. Ringger et al.,",
"2007; Bloodgood and Callison-Burch, 2010).",
"For sequence labeling tasks, Haertel et al. (2008) found that labeling effort depends both on the number of words labeled (which we model), plus a fixed cost for reading (which we do not).",
"In the context of imitation learning, active approaches have also been considered for at least three decades, often called learning with an external critic and learning by watching (Whitehead, 1991).",
"More recently, Judah et al. (2012) describe RAIL , an active learning-for-imitation-learning algorithm akin to our ACTIVEDAGGER baseline, but which in principle would operate with any underlying i.i.d. active learning algorithm (not just our specific choice of uncertainty sampling).",
"Our goal is to learn a structured prediction model with minimal human expert supervision, effectively by combining human annotation with a noisy heuristic.",
"We present LEAQI to achieve this.",
"As a concrete example, return to Figure 1: at s 10 , must predict the label of the tenth word.",
"If is confident in its own prediction, LEAQI can avoid any query, similar to traditional active learning.",
"If is not confident, then LEAQI considers the label suggested by a noisy heuristic (here: ORG ).",
"LEAQI predicts whether the true expert label is likely to disagree with the noisy heuristic.",
"Here, it predicts no disagreement and avoids querying the expert.",
"Our algorithm, LEAQI, is specified in Alg 2. As input, LEAQI takes a policy class , a hypothesis class H for the difference classifier (assumed to be symmetric and to contain the constant one func-tion), a number of episodes N , an expert policy (cid:63) , a heuristic policy h , and a confidence parameter b > 0 .",
"The general structure of LEAQI follows that of DAgger, but with three key differences:",
"(a) roll-in (line 7) is according to the learned policy (not mixed with the expert, as that would require additional expert queries),",
"(b) actions are queried only if the current policy is uncertain at s (line 12), and",
"(c) the expert (cid:63) is only queried if it is predicted to disagree with the heuristic h at s by the difference classifier, or if apple tasting method switches the difference classifier label (line 15; see 3.2).",
"In particular, at each state visited by i , LEAQI estimates z , the certainty of i 's prediction at that state (see 3.3).",
"A sampling probability is set to b/ ( b + z ) where z is the certainty, and so if the model is very uncertain then tends to zero, following (Cesa-Bianchi et al., 2006).",
"With probability , LEAQI will collect some label.",
"When a label is collected (line 12), the difference classifier h i is queried on state s to predict if (cid:63) and h are likely to disagree on the correct action.",
"(Recall that h 1 always predicts disagreement per line 4.)",
"The difference classifier's prediction, d i , is passed to an apple tasting method in line 15.",
"Intuitively, most apple tasting procedures (including the one we use, STAP; see 3.2) return d i , unless the difference classifier is making many Type II errors, in which case it may return d i .",
"A target action is set to h ( s ) if the apple tast-Algorithm 3 AppleTaste_STAP ( S, a h i , d i ) 1: (cid:46) count examples that are action a h i 2: let t = (cid:80) ( _ ,a, _ , _ ) S 1 [ a h i = a ] 3: (cid:46) count mistakes made on action a h i 4: let m = (cid:80) ( _ ,a, d,d ) S 1 [ d (cid:54) = d a h i = a ] 5: w = t | S | (cid:46) percentage of time a h i was seen 6: if w < 1 then 7: (cid:46) skew distribution 8: draw r Beta (1 w, 1) 9: else 10: draw r Uniform (0 , 1) 11: end if 12: return ( d = 1) ( r (cid:112) ( m + 1) /t ) ing algorithm returns agree (line 17), and the expert (cid:63) is only queried if disagreement is predicted (line 20).",
"The state and target action (either heuristic or expert) are then added to the training data.",
"Finally, if the expert was queried, then a new item is added to the difference dataset, consisting of the state, the heuristic action on that state, the difference classifier's prediction, and the ground truth for the difference classifier whose input is s and whose label is whether the expert and heuristic actually disagree.",
"Finally, i +1 is trained on the accumulated action data, and h i +1 is trained on the difference dataset (details in 3.3).",
"There are several things to note about LEAQI: (cid:5) If the current policy is already very certain, a expert annotator is never queried.",
"(cid:5)",
"If a label is queried, the expert is queried only if the difference classifier predicts disagreement with the heuristic, or the apple tasting procedure flips the difference classifier prediction.",
"(cid:5)",
"Due to apple tasting, most errors the difference classifier makes will cause it to query the expert unnecessarily; this is the safe type of error (increasing sample complexity but not harming accuracy), versus a Type II error (which leads to biased labels).",
"(cid:5)",
"The difference classifier is only trained on states where the policy is uncertain, which is exactly the distribution on which it is run.",
"The difference classifier h H must be trained (line 27) based on one-sided feedback (it only observes",
"observes errors when it predicts disagree) to minimize Type II errors (it should only very rarely predict agree when the truth is disagree).",
"This helps keep the labeled data for the learned policies unbiased.",
"The main challenge here is that the feedback to the difference classifier is one-sided : that is, if it predicts disagree then it gets to see the truth, but if it predicts agree it never finds out if it was wrong.",
"We use one of (Helmbold et al., 2000)'s algorithms, STAP (see Alg 3), which works by random sampling from apples that are predicted to not be tasted and tasting them anyway (line 12).",
"Formally, STAP tastes apples that are predicted to be bad with probability (cid:112) ( m + 1) /t , where m is the number of mistakes, and t is the number of apples tasted so far.",
"We adapt Apple Tasting algorithm STAP to our setting for controlling the number of Type II errors made by the difference classifier as follows.",
"First, because some heuristic actions are much more common than others, we run a separate apple tasting scheme per heuristic action (in the sense that we count the number of error on this heuristic action rather than globally).",
"Second, when there is significant action imbalance 2 we find it necessary to skew the distribution from STAP more in favor of querying.",
"We achieve this by sampling from a Beta distribution (generalizing the uniform), whose mean is shifted toward zero for more frequent heuristic actions.",
"This increases the chance that Apple Tasting will have on finding bad apples error for each action (thereby keeping the false positive rate low for predicting disagreement).",
"In step 11, LEAQI must estimate the certainty of i on s .",
"Following Cesa-Bianchi et al. (2006), we implement this using a margin-based criteria.",
"To achieve this, we consider as a function that maps actions to scores and then chooses the action with largest score.",
"The certainty measure is then the difference in scores between the highest and second highest scoring actions: certainty ( , s ) = max a ( s, a ) max a (cid:48) (cid:54) = a ( s, a (cid:48) ) 2 For instance, in named entity recognition, both the heuristic and expert policies label the majority of words as O (not an entity).",
"As a result, when the heuristic says O , it is very likely that the expert will agree.",
"However, if we aim to optimize for something other than accuracylike F1it is precisely these disagreements that we need to find.",
"Theoretically, the main result for LEAQI is an interpretation of the main DAgger result(s).",
"Formally, let d denote the distribution of states visited by , C ( s, a ) [0 , 1] be the immediate cost of performing action a in state s , C ( s ) = E a ( s ) C ( s, a ) , and the total expected cost of to be J ( ) = T E s d C ( s ) , where T is the length of trajectories.",
"C is not available to a learner in an imitation setting; instead the algorithm observes an expert and minimizes a surrogate loss (cid:96) ( s, ) (e.g., (cid:96) may be zero/one loss between and (cid:63) ).",
"We assume (cid:96) is strongly convex and bounded in [0 , 1] over .",
"Given this setup assumptions, let (cid:15) pol-approx = min 1 N (cid:80) Ni =1 E s d i (cid:96) ( s, ) be the true loss of the best policy in hindsight, let (cid:15) dc-approx = min h H 1 N (cid:80) Ni =1 E s d i err ( s, h, (cid:63) ( s ) (cid:54) = h ( s )) be the true error of the best difference classifier in hindsight, and assuming that the regret of the policy learner is bounded by reg pol ( N ) after N steps, Ross et al. (2011) shows the following 3 : Theorem 1 (Thm 4.3 of Ross et al. (2011)) .",
"After N episodes each of length T , under the assumptions above, with probability at least 1 there exists a policy 1: N such that: E s d (cid:96) ( s, ) (cid:15) pol-approx + reg pol ( N ) + (cid:112) (2 /N ) log(1 / ) This holds regardless of how 1: N are trained (line 26).",
"The question of how well LEAQI performs becomes a question of how well the combination of uncertainty-based sampling and the difference classifier learn.",
"So long as those do a good job on their individual classification tasks, DAgger guarantees that the policy will do a good job.",
"This is formalized below, where Q (cid:63) ( s, a ) is the best possible cumulative cost (measured by C ) starting in state s and taking action a : Theorem 2 (Theorem 2.2 of Ross et al. (2011)) .",
"Let u be such that Q (cid:63) ( s, a ) Q (cid:63) ( s, (cid:63) ( s )) u for all a and all s with d ( s ) > 0 ; then for some 1: N , as N : J ( ) J ( (cid:63) ) + uT (cid:15) pol-approx Here, u captures the most long-term impact a single decision can have; for example, for average Hamming loss, it is straightforward to see that u = 1 T 3 Proving a stronger result is challenging: analyzing the sample complexity of an active learning algorithm that uses a difference classifiereven in the non-sequential settingis quite involved (Zhang and Chaudhuri, 2015).",
"because any single mistake can increase the number of mistakes by at most 1 .",
"For precision, recall and F-score, u can be as large as one in the (rare) case that a single decision switches from one true positive to no true positives.",
"The primary research questions we aim to answer experimentally are:",
"Q1 Does uncertainty-based active learning achieve lower query complexity than passive learning in the learning to search settings?",
"Q2 Does learning a difference classifier improve query efficiency over active learning alone?",
"Q3 Does Apple Tasting successfully handle the problem of learning from one-sided feedback?",
"Q4 Is the approach robust to cases where the noisy heuristic is uncorrelated with the expert?",
"Q5 Is casting the heuristic as a policy more effective than using its output as features?",
"To answer these questions, we conduct experiments on three tasks (see Table 1): English named entity recognition, English scientific keyphrase extraction, and low-resource part of speech tagging on Modern Greek (el), selected as a low-resource setting.",
"In order to address the research questions above, we compare LEAQI to several baselines.",
"The baselines below compare our approach to previous methods: DAGGER .",
"ACTIVEDAGGER .",
"An active variant of DAgger that asks for labels only when uncertain.",
"(This is equivalent to LEAQI, but with neither the difference classifier nor apple tasting.)",
"The baselines and LEAQI share a linear relationship.",
"DAGGER is the baseline algorithm used by all algorithms described above but it is very query inefficient with respect to an expert annotator.",
"ACTIVEDAGGER introduces active learning to make DAGGER more query efficient; the delta to the previous addresses Q1.",
"LEA QI+N OAT introduces the difference classifier; the delta addresses Q2.",
"LEAQI adds apple tasting to deal with one-sided learning; the delta addresses Q3.",
"Finally, LEA QI+N OISYHEUR .",
"(vs LEAQI) addresses Q4 and the +F EAT variants address Q5.",
"For named entity recognition , we use training, validation, and test data from CoNLL'03 (Tjong Kim Sang and De Meulder, 2003), consisting of IO tags instead of BIO tags (the B tag is almost never used in this dataset, so we never attempt to predict it) over four entity types: Person, Organization, Location, and Miscellaneous.",
"For part of speech tagging , we use training and test data from modern Greek portion of the Universal Dependencies (UD) treebanks (Nivre, 2018), consisting of 17 universal tags 4 .",
"For keyphrase extraction , we use training, validation, and test data from SemEval 2017 Task 10 (Augenstein et al., 2017), consisting of IO tags (we use one I tag for all three keyphrase types).",
"In all tasks, we implement both the policy and difference classifier by fine-tuning the last layer of a BERT embedding representation (Devlin et al., 2019).",
"More specifically, for a sentence of length T , w 1 , . . . , w T , we first compute BERT embeddings for each word, x 1 , . . . , x T using the appropriate BERT model: English BERT and M-BERT 5 for named entity and part-of-speech, respectively, and SciBERT (Beltagy et al., 2019) for keyphrase extraction.",
"We then represent the state at position t by concatenating the word embedding at that position with a one-hot representation of the previous action: s t = [ w t ; onehot ( a t 1 )] .",
"This feature representation is used both for learning the labeling policy and also learning the difference classifier.",
"In all experiments, the expert (cid:63) is a simulated human annotator who annotates one word at a time.",
"The expert returns the optimal action for the relevant evaluation metric (F-score for named entity recognition and keyphrase extraction, and accuracy for part-of-speech tagging).",
"We take the annotation cost to be the total number of words labeled.",
"The heuristic we implement for named entity recognition is a high-precision gazetteer-based string matching approach.",
"We construct this by taking a gazetteer from Wikipedia using the CogComp framework (Khashabi et al., 2018), and use FlashText (Singh, 2017) to label the dataset.",
"(Footnote 4: ADJ, ADP, ADV, AUX, CCONJ, DET, INTJ, NOUN, NUM, PART, PRON, PROPN, PUNCT, SCONJ, SYM, VERB, X. Footnote 5: Multilingual BERT (Devlin et al., 2019).)",
"This heuristic achieves a precision of 0.88, recall of 0.27, and F-score of 0.41 on the training data.",
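A toy sketch of this style of gazetteer labeling, using FlashText's KeywordProcessor on an invented two-entry gazetteer (the paper's gazetteer is much larger and derived from Wikipedia via CogComp):

```python
# Requires: pip install flashtext
from flashtext import KeywordProcessor

gazetteer = {"Nikola Tesla": "PER", "Albert Einstein": "PER"}  # toy entries

kp = KeywordProcessor(case_sensitive=True)
for name, label in gazetteer.items():
    kp.add_keyword(name, label)

sentence = "Nikola Tesla received letters from Albert Einstein ."
tokens = sentence.split()

# Precompute character offsets of each token.
offsets, pos = [], 0
for tok in tokens:
    start = sentence.index(tok, pos)
    offsets.append((start, start + len(tok)))
    pos = start + len(tok)

# High-precision labeling: I-<label> inside a gazetteer match, O elsewhere.
tags = ["O"] * len(tokens)
for label, m_start, m_end in kp.extract_keywords(sentence, span_info=True):
    for i, (t_start, t_end) in enumerate(offsets):
        if t_start >= m_start and t_end <= m_end:
            tags[i] = "I-" + label
print(list(zip(tokens, tags)))
```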
"The keyphrase extraction heuristic is the output of an unsupervised keyphrase extraction approach (Florescu and Caragea, 2017).",
"This system is a graph-based approach that constructs word-level graphs incorporating position information from all word occurrences, and then uses PageRank to score the words and phrases.",
"This heuristic achieves a precision of 0.20, recall of 0.44, and F-score of 0.27 on the training data.",
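The sketch below gives a rough flavor of such position-biased graph ranking using networkx; the window size, bias scheme, and toy text are illustrative assumptions in the spirit of Florescu and Caragea (2017), not the authors' exact configuration:

```python
# Requires: pip install networkx
import networkx as nx
from collections import defaultdict

words = ("neural keyphrase extraction uses graph based ranking of "
         "candidate words for keyphrase scoring").split()

# Build a word co-occurrence graph over a sliding window.
G = nx.Graph()
window = 2
for i in range(len(words)):
    for j in range(i + 1, min(i + 1 + window, len(words))):
        G.add_edge(words[i], words[j])

# Position bias: words occurring earlier receive a larger prior
# (weight each word by the sum of 1/position over its occurrences).
bias = defaultdict(float)
for position, w in enumerate(words, start=1):
    bias[w] += 1.0 / position
total = sum(bias.values())
personalization = {w: bias[w] / total for w in G.nodes}

scores = nx.pagerank(G, personalization=personalization)
print(sorted(scores, key=scores.get, reverse=True)[:3])
```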
"The part of speech tagging heuristic is based on a small dictionary compiled from Wiktionary.",
"Following Haghighi and Klein (2006) and Zesch et al. (2008), we extract this dictionary using Wiktionary as follows: for word w in our training data, we find the part-of-speech y by querying Wiktionary.",
"If w is in Wiktionary, we convert the Wiktionary part of speech tag to a Universal Dependencies tag (see A.1), and if word w is not in Wiktionary, we use a default label of X.",
"Furthermore, if word w has multiple parts of speech, we select the first part of speech tag in the list.",
"The label X is chosen 90% of the time.",
"For the remaining 10%, the heuristic achieves an accuracy of 0.67 on the training data.",
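A minimal sketch of this dictionary heuristic, with a tiny hypothetical Wiktionary-derived dictionary standing in for the real one:

```python
# Hypothetical pre-extracted Wiktionary dictionary (UD tags per word).
WIKTIONARY = {
    "the": ["DET"],
    "cat": ["NOUN", "VERB"],   # multiple parts of speech: the first is used
    "runs": ["VERB"],
}

def heuristic_tag(word: str) -> str:
    tags = WIKTIONARY.get(word.lower())
    return tags[0] if tags else "X"   # default label X for unknown words

print([heuristic_tag(w) for w in ["The", "cat", "runs", "quixotically"]])
# ['DET', 'NOUN', 'VERB', 'X']
```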
"Our experimental setup is online active learning.",
"We make a single pass over a dataset, and the goal is to achieve an accurate system as quickly as possible.",
"We measure performance (accuracy or F-score) after every 1000 words (≈50 sentences) on held-out test data, and produce error bars by averaging across three runs and reporting standard deviations.",
"Hyperparameters for DAGGER are optimized using grid-search on the named entity recognition training data and evaluated on development data.",
"We then fix DAGGER hyperparameters for all other experiments and models.",
"The difference classifier hyperparameters are subsequently optimized in the same manner.",
"We fix the difference classifier hyperparameters for all other experiments.",
"The main results are shown in the top two rows of Figure 2; ablations of LEAQI are shown in Figure 3.",
"(Footnote 6: We note that this is a somewhat optimistic hyperparameter setting: in the real world, model selection for active learning is extremely challenging.)",
"Details on hyperparameter selection and LEAQI's robustness across a rather wide range of choices are presented in A.2, A.3 and A.4 for keyphrase extraction and part of speech tagging.",
"In Figure 2, the top row shows traditional learning curves (performance vs number of queries), and the bottom row shows the number of queries made to the expert as a function of the total number of words seen.",
"Active vs Passive (Q1).",
"In all cases, we see that the active strategies improve on the passive strategies; this difference is largest in keyphrase extraction, middling for part of speech tagging, and small for NER.",
"While not surprising given previous successes of active learning, this confirms that it is also a useful approach in our setting.",
"As expected, the active algorithms query far less than the passive approaches, and LEAQI queries the least.",
"Heuristic as Features vs Policy (Q5).",
"We see that while adding the heuristic's output as a feature can be modestly useful, it is not uniformly useful and, at least for keyphrase extraction and part of speech tagging, it is not as effective as LEAQI.",
"For named entity recognition, it is not effective at all, but this is also a case where all algorithms perform essentially the same.",
"Indeed, here, LEAQI learns quickly with few queries, but never quite reaches the performance of ActiveDAgger.",
"This is likely due to the difference classifier becoming overly confident too quickly, especially on the O label, given the (relatively well known) oddness in mismatch between development data and test data on this dataset.",
"Difference Classifier Efficacy (Q2).",
"Turning to the ablations (Figure 3), we can address Q2 by comparing the ActiveDAgger curve to the LeaQI+NoAT curve.",
"Here, we see that on NER and keyphrase extraction, adding the difference classifier without adding apple tasting results in a far worse model: it learns very quickly but plateaus much lower than the best results.",
"The exception is part of speech tagging, where apple tasting does not seem necessary (but also does not hurt).",
"Overall, this essentially shows that without controlling Type II errors, the difference classifier on its own does not fulfill its goals.",
"Apple Tasting Efficacy (Q3).",
"Also considering the ablation study, we can compare LeaQI+NoAT with LeaQI.",
"In the case of part of speech tagging, there is little difference: using apple tasting to combat issues of learning from one sided feedback neither helps nor hurts performance.",
"However, for both named entity recognition and keyphrase extraction, removing apple tasting leads to faster learning, but substantially lower final performance (accuracy or F-score).",
"This is somewhat expected. [Figure: named entity recognition learning curves, phrase label F-score (0.0 to 0.7) vs. number of words queried (0 to 60K), comparing LeaQI and LeaQI+NoisyHeur.]",
"Robustness to Poor Heuristic (Q4).",
"We compare LeaQI+NoisyHeur to ActiveDAgger.",
"Because the heuristic here is useless, the main hope is that it does not degrade performance below ActiveDAgger.",
"Indeed, that is what we see in all three cases: the difference classifier is able to learn quite quickly to essentially ignore the heuristic and only rely on the expert.",
"In this paper, we considered the problem of reducing the number of queries to an expert labeler for structured prediction problems.",
"We took an imitation learning approach and developed an algorithm, LEAQI, which leverages a source that has low-quality labels: a heuristic policy that is suboptimal but free.",
"To use this heuristic as a policy, we learn a difference classifier that effectively tells LEAQI when it is safe to treat the heuristic's action as if it were optimal.",
"We showed empirically across named entity recognition, keyphrase extraction, and part of speech tagging tasks that the active learning approach improves significantly on passive learning, and that leveraging a difference classifier improves on that.",
"1. In some settings, learning a difference classifier may be as hard as or harder than learning the structured predictor, for instance if the task is binary sequence labeling (e.g., word segmentation), limiting its usefulness.",
"2. The true labeling cost is likely more complicated than simply the number of individual actions queried to the expert.",
"Despite these limitations, we hope that LEAQI provides a useful (and relatively simple) bridge that can enable using rule-based systems, heuristics, and unsupervised models as building blocks for more complex supervised learning systems.",
"This is particularly attractive in settings where we have very strong rule-based systems, ones which often outperform the best statistical systems, like coreference resolution (Lee et al., 2011), information extraction (Riloff and Wiebe, 2003), and morphological segmentation and analysis (Smit et al., 2014).",
"We thank Rob Schapire, Chicheng Zhang, and the anonymous ACL reviewers for very helpful comments and insights.",
"This material is based upon work supported by the National Science Foundation under Grant No. 1618193 and an ACM SIGHPC/Intel Computational and Data Science Fellowship to KB.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation nor of the ACM."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"method",
"result",
"other",
"other",
"other"
] |
[
"We propose a new end-to-end model that treats AMR parsing as a series of dual decisions on the input sequence and the incrementally constructed graph.",
"At each time step, our model performs multiple rounds of attention, reasoning, and composition that aim to answer two critical questions: (1) which part of the input sequence to abstract; and (2) where in the output graph to construct the new concept.",
"We show that the answers to these two questions are mutually causal.",
"We design a model based on iterative inference that helps achieve better answers in both perspectives, leading to greatly improved parsing accuracy.",
"Our experimental results outperform all previously reported SMATCH scores by large margins.",
"Remarkably, without the help of any large-scale pre-trained language model (e.g., BERT), our model already surpasses previous state-of-the-art using BERT.",
"With the help of BERT, we can push the state-of-the-art results to 80.2% on LDC2017T10 (AMR 2.0) and 75.4% on LDC2014T12 (AMR 1.0).",
"Abstract Meaning Representation (AMR) (Ba-narescu et al., 2013) is a broad-coverage semantic formalism that encodes the meaning of a sentence as a rooted, directed, and labeled graph, where nodes represent concepts and edges represent relations (See an example in Figure 1).",
"AMR parsing is the task of transforming natural language text into AMR.",
"One of the biggest challenges of AMR parsing is the lack of explicit alignments between nodes (concepts) in the graph and words in the text.",
"This characteristic not only poses great difficulty in concept prediction but also ties concept prediction closely to relation prediction.",
"(Footnote: The work described in this paper is substantially supported by grants from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Code: 14204418) and the Direct Grant of the Faculty of Engineering, CUHK (Project Code: 4055093).)",
"While most previous works rely on a pre-trained aligner to train a parser, some recent attempts include: modeling the alignments as latent variables (Lyu and Titov, 2018), attention-based sequence-to-sequence transduction models (Barzdins and Gosko, 2016; Konstas et al., 2017; van Noord and Bos, 2017), and attention-based sequence-to-graph transduction models (Cai and Lam, 2019; Zhang et al., 2019b).",
"Sequence-to-graph transduction models build a semantic graph incrementally via spanning one node at every step.",
"This property is appealing in terms of both computational efficiency and cognitive modeling since it mimics what human experts usually do, i.e., first grasping the core ideas then digging into more details (Banarescu et al., 2013; Cai and Lam, 2019).",
"Unfortunately, the parsing accuracy of existing works, including recent state-of-the-art models (Zhang et al., 2019a,b), remains unsatisfactory compared to human-level performance (footnote 1), especially in cases where the sentences are rather long and informative, which indicates substantial room for improvement.",
"One possible reason for the deficiency is the inherent defect of one-pass prediction process; that is, the lack of the modeling capability of the interactions between concept prediction and relation prediction, which is critical to achieving fully-informed and unambiguous decisions.",
"We introduce a new approach tackling AMR parsing, following the incremental sequence-to-graph transduction paradigm.",
"We explicitly characterize each spanning step as the efforts for finding which part to abstract with respect to the input sequence , and where to construct with respect to the partially constructed output graph .",
"(Footnote 1: The average annotator vs. inter-annotator agreement (SMATCH) was 0.83 for newswire and 0.79 for web text according to Banarescu et al. (2013).)",
"Equivalently, we treat AMR parsing as a series of dual decisions on the input sequence and the incrementally constructed graph.",
"Intuitively, the answer of what concept to abstract decides where to construct (i.e., the relations to existing concepts), while the answer of where to construct determines what concept to abstract.",
"Our proposed model is supported by neural networks with explicit structures for attention, reasoning, and composition, integrated with an iterative inference algorithm.",
"It iterates between finding supporting text pieces and reading the partially constructed semantic graph, inferring more accurate and harmonious expansion decisions progressively.",
"Our model is aligner-free and can be effectively trained with limited amount of labeled data.",
"Experiments on two AMR benchmarks demonstrate that our parser outperforms the previous best parsers on both benchmarks.",
"It achieves the best-reported SMATCH scores (F1): 80.2% on LDC2017T10 and 75.4% on LDC2014T12, surpassing the previous state-of-the-art models by large margins.",
"On a coarse-grained level, we can categorize existing AMR parsing approaches into two main classes: Two-stage parsing (Flanigan et al., 2014; Lyu and Titov, 2018; Zhang et al., 2019a) uses a pipeline design for concept identification and relation prediction, where the concept decisions precede all relation decisions; One-stage parsing constructs a parse graph incrementally.",
"For more fine-grained analysis, those one-stage parsing methods can be further categorized into three types: Transition-based parsing (Wang et al., 2016; Damonte et al., 2017; Ballesteros and Al-Onaizan, 2017; Peng et al., 2017; Guo and Lu, 2018; Liu et al., 2018; Wang and Xue, 2017; Naseem et al., 2019) processes a sentence from left-to-right and constructs the graph incrementally by alternately inserting a new node or building a new edge.",
"Seq2seq-based parsing (Barzdins and Gosko, 2016; Konstas et al., 2017; van Noord and Bos, 2017; Peng et al., 2018) views parsing as sequence-to-sequence transduction by some linearization of the AMR graph.",
"The concept and relation prediction are then treated equally with a shared vocabulary.",
"The third class is graph-based parsing (Cai and Lam, 2019; Zhang et al., 2019b), where at each time step, a new node along with its connections to existing nodes are jointly decided, either in order (Cai and Lam, 2019) or in parallel (Zhang et al., 2019b).",
"[Figure 1: the sentence 'The boy must not go' with two partial AMR graphs, showing candidate expansions around obligate-01 and go-02 via ARG0, ARG2, and polarity edges.]",
"So far, the reciprocal causation of relation prediction and concept prediction has not been closely studied and well utilized.",
"There are also some exceptions staying beyond the above categorization.",
"Peng et al. (2015) introduce a synchronous hyperedge replacement grammar solution.",
"Pust et al. (2015) regard the task as a machine translation problem, while Artzi et al. (2015) adapt combinatory categorical grammar.",
"Groschwitz et al. (2018) and Lindemann et al. (2019) view AMR graphs as the structure of the AM algebra.",
"Our approach is inspired by the deliberation process when a human expert is deducing a semantic graph from a sentence.",
"The output graph starts from an empty graph and spans incrementally in a node-by-node manner.",
"At any time step of this process, we are distilling the information for the next expansion.",
"We call it expansion because the new node, as an abstract concept of some specific text fragments in the input sentence, is derived to complete some missing elements in the current semantic graph.",
"Specifically, given the input sentence and the current partially constructed graph, we are answering two critical questions: which part of the input sequence to abstract, and where in the output graph to construct the new concept.",
"For instance, Figures 1(a) and (b) show two possible choices for the next expansion.",
"In Figure 1(a), the word boy is abstracted to the concept boy to complement the subject information of the event go-02.",
"On the other hand, in Figure 1(b), a polarity attribute of the event go-02 is constructed, which is triggered by the word not in the sentence.",
"[Figure 2 labels: (Current Graph), (Input Sequence); example sentence: 'The boy wants the girl to believe him.']",
"We note that the answer to one of the questions can help answer the other.",
"For instance, if we have decided to render the word not into the graph, then we will consider adding an edge labeled polarity, and finally determine its attachment to the existing event go-02 (rather than an edge labeled ARG0 to the same event, though it is also present in the gold graph).",
"On the other hand, if we have decided to find the subject (ARG0 relation) of the action go-02, we are confident to locate the word boy instead of function words like not or must, and thus unambiguously predict the right concept boy.",
"Another possible circumstance is that we may make a mistake trying to ask something that is not present in the sentence (e.g., the destination of the go-02 action).",
"This attempt will be rejected by a review of the sentence.",
"The rationale is that literally we cannot find the destination information in the sentence.",
"Similarly, if we mistakenly propose to abstract some parts of the sentence that are not ready for construction yet, the proposal will be rejected by another inspection on the graph since that there is nowhere to place such a new concept.",
"We believe the mutual causalities, as described above, are useful for action disambiguation and harmonious decision making, which eventually result in more accurate parses.",
"We formulate AMR parsing as a series of dual graph-sequence decisions and design an iterative inference approach to tackle each of them.",
"It is sort of analogous to the cognition procedure of a person, who might first notice part of the important information in one side (graph or sequence), then try to confirm her decision at the other side, which could just refute her former hypothesis and propose a new one, and finally converge to a conclusion after multiple rounds of reasoning.",
"Formally, the parsing model consists of a series of graph expansion procedures $\{G^0, \ldots, G^i, \ldots\}$, starting from an empty graph $G^0$.",
"In each turn of expansion, the following iterative inference process is performed: $y_t^i = f(G^i, x_t^i)$ and $x_{t+1}^i = g(W, y_t^i)$, where $W$ and $G^i$ are the input sequence and the current semantic graph respectively.",
"$f(\cdot)$ and $g(\cdot)$ seek where to construct (edge prediction) and what to abstract (node prediction) respectively, and $x_t^i$, $y_t^i$ are the $t$-th graph hypothesis (where to construct) and the $t$-th sequence hypothesis (what to abstract) for the $i$-th expansion step respectively.",
"For clarity, we may drop the superscript i in the following descriptions.",
"Figure 2 depicts an overview of the graph-sequence iterative inference process.",
"Our model has four main components: (1) Sequence Encoder, which generates a set of text memories (per token) to provide grounding for concept alignment and abstraction; (2) Graph Encoder, which generates a set of graph memories (per node) to provide grounding for relation reasoning; (3) Concept Solver, where a previous graph hypothesis is used for concept prediction; and (4) Graph Solver, where a previous concept hypothesis is used for relation prediction.",
"The last two components correspond to the reasoning functions g ( ) and f ( ) respectively.",
"The text memories can be computed by Sentence Encoder at the beginning of the whole parsing while the graph memories are constructed by Graph Encoder incrementally as the parsing progresses.",
"During the iterative inference, a semantic representation of current state is used to attend to both graph and text memories (blue and red arrows) in order to locate the new concept and obtain its relations to the existing graph, both of which subsequently refine each other.",
"Intuitively, after a first glimpse of the input sentence and the current graph, specific sub-areas of both sequence and graph are revisited to obtain a better understanding of the current situation.",
"Later steps typically read the text in detail with specific learning aims, either confirm-ing or overturning a previous hypothesis.",
"Finally, after several iterations of reasoning steps, the refined sequence/graph decisions are used for graph expansion.",
"As mentioned above, we employ a sequence encoder to convert the input sentence into vector representations.",
"The sequence encoder follows the multi-layer Transformer architecture described in Vaswani et al. (2017).",
"At the bottom layer, each token is firstly transformed into the concatenation of features learned by a character-level convolutional neural network (charCNN, Kim et al., 2016) and randomly initialized embeddings for its lemma, part-of-speech tag, and named entity tag.",
"Additionally, we also include features learned by pre-trained language model BERT (Devlin et al., 2019).",
"Formally, for an input sequence $w_1, w_2, \ldots, w_n$ with length $n$, we insert a special token BOS at the beginning of the sequence.",
"For clarity, we omit the detailed transformations (Vaswani et al., 2017) and denote the final output from our sequence encoder as $\{h_0, h_1, \ldots, h_n\} \subset \mathbb{R}^d$, where $h_0$ corresponds to the special token BOS and serves as an overall representation, while the others are considered as contextualized word representations.",
"(Footnote 2: We obtain word-level representations from pre-trained BERT in the same way as Zhang et al. (2019a,b), where subtoken representations at the last layer are averaged.)",
"Note that the sequence encoder only needs to be invoked once, and the produced text memories are used for the whole parsing procedure.",
"We use an idea similar to that of Cai and Lam (2019) to encode the incrementally expanding graph.",
"Specifically, a graph is simply treated as a sequence of nodes (concepts) in the chronological order of when they are inserted into the graph.",
"We employ multi-layer Transformer architecture with masked self-attention and source-attention, which only allows each position in the node sequence to attend to all positions up to and including that position, and every position in the node sequence to attend over all positions in the input sequence.",
"While this design allows for significantly more parallelization during training and computation-saving incrementality during testing (footnotes 3 and 4), it inherently neglects the edge information.",
"We attempted to alleviate this problem by incorporating the idea of Strubell et al. (2018) that applies auxiliary supervision at attention heads to encourage them to attend to each node's parents in the AMR graph.",
"However, we did not see performance improvement.",
"We attribute the failure to the fact that the neural attention mechanisms on their own are already capable of learning to attend to useful graph elements, and the auxiliary supervision is likely to disturb the ultimate parsing goal.",
"Consequently, for the current graph G with m nodes, we take its output concept sequence $c_1, c_2, \ldots, c_m$ as input.",
"Similar to the sequence encoder, we insert a special token BOG at the beginning of the concept sequence.",
"Each concept is firstly transformed into the concatenation of feature vector learned by a char-CNN and randomly initialized embedding.",
"Then, a multi-layer Transformer encoder with masked self-attention and source-attention is applied, resulting in vector representations $\{s_0, s_1, \ldots, s_m\} \subset \mathbb{R}^d$, where $s_0$ represents the special concept BOG and serves as a dummy node, while the others are considered as contextualized node representations.",
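A small sketch of the two masks this implies (assuming PyTorch; toy sizes): each node may attend to nodes inserted up to and including itself, and to every input token:

```python
import torch

m, n = 4, 7  # nodes so far (incl. BOG), input length (incl. BOS)
# Masked self-attention: node j attends only to nodes 0..j.
self_attn_mask = torch.tril(torch.ones(m, m)).bool()
# Source-attention: every node may attend to every input token.
source_attn_mask = torch.ones(m, n).bool()
print(self_attn_mask.int())
```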
"(Footnote 3: It is analogous to a standard Transformer decoder (Vaswani et al., 2017) for sequence-to-sequence learning. Footnote 4: Trivially employing a graph neural network here can be computationally expensive and intractable since it needs to re-compute all graph representations after every expansion.)",
"At each sequence reasoning step t, the concept solver receives a state vector $y_t$ that carries the latest graph decision and the input sequence memories $h_1, \ldots, h_n$ from the sequence encoder, and aims to locate the proper parts of the input sequence to abstract and generate a new concept.",
"We employ the scaled dot-product attention proposed in Vaswani et al. (2017) to solve this problem.",
"Concretely, we first calculate an attention distribution over all input tokens: $\alpha_t = \mathrm{softmax}\big((W^Q y_t)^\top W^K h_{1:n} / \sqrt{d_k}\big)$, where $W^Q, W^K \in \mathbb{R}^{d_k \times d}$ denote learnable linear projections that transform the input vectors into the query and key subspaces respectively, and $d_k$ represents the dimensionality of the subspace.",
"The attention weights $\alpha_t \in \mathbb{R}^n$ provide a soft alignment between the new concept and the tokens in the input sequence.",
"We then compute the probability distribution of the new concept label through a hybrid of three channels.",
"First, $\alpha_t$ is fed through an MLP and softmax to obtain a probability distribution over a pre-defined vocabulary: $\mathrm{MLP}(\alpha_t) = (W^V h_{1:n})\,\alpha_t + y_t$ (Eq. 1) and $P^{(\mathrm{vocab})} = \mathrm{softmax}(W^{(\mathrm{vocab})}\mathrm{MLP}(\alpha_t) + b^{(\mathrm{vocab})})$, where $W^V \in \mathbb{R}^{d \times d}$ denotes the learnable linear projection that transforms the text memories into the value subspace, and the value vectors are averaged according to $\alpha_t$ for concept label prediction.",
"Second, the attention weights $\alpha_t$ directly serve as a copy mechanism (Gu et al., 2016; See et al., 2017), i.e., the probabilities of copying a token lemma from the input text as a node label.",
"Third, to address attribute values such as person names or numerical strings, we also use $\alpha_t$ for another copy mechanism that directly copies the original strings of input tokens.",
"The above three channels are combined via a soft switch to control the production of the concept label from different sources: $[p_0, p_1, p_2] = \mathrm{softmax}(W^{(\mathrm{switch})}\mathrm{MLP}(\alpha_t))$, where MLP is the same as in Eq. 1, and $p_0$, $p_1$ and $p_2$ are the probabilities of the three prediction channels respectively.",
"Hence, the final prediction probability of a concept c is given by: $P(c) = p_0\, P^{(\mathrm{vocab})}(c) + p_1 \sum_{i \in L(c)} \alpha_t[i] + p_2 \sum_{i \in T(c)} \alpha_t[i]$, where $[i]$ indexes the $i$-th element, and $L(c)$ and $T(c)$ are the index sets of lemmas and tokens respectively that have the same surface form as c.",
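The sketch below traces these computations with random weights (a single attention head and toy dimensions; the MLP is collapsed to the Eq. 1 form, and only the vocabulary and copy-lemma channels are shown):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, d_k, n, V = 16, 8, 5, 100   # hidden size, attn subspace, sentence length, vocab
W_Q, W_K = torch.randn(d_k, d), torch.randn(d_k, d)
W_V = torch.randn(d, d)
W_vocab, b_vocab = torch.randn(V, d), torch.zeros(V)
W_switch = torch.randn(3, d)

y_t = torch.randn(d)            # state carrying the latest graph decision
h = torch.randn(d, n)           # text memories h_{1:n}, one column per token

# Scaled dot-product attention over input tokens (soft alignment alpha_t).
scores = (W_Q @ y_t) @ (W_K @ h) / d_k ** 0.5
alpha_t = F.softmax(scores, dim=-1)

mlp = (W_V @ h) @ alpha_t + y_t                       # Eq. (1)
p_vocab = F.softmax(W_vocab @ mlp + b_vocab, dim=-1)  # generate-from-vocab channel
p_switch = F.softmax(W_switch @ mlp, dim=-1)          # soft switch [p0, p1, p2]

# Copy-lemma channel: probability mass p1 spread over tokens by alpha_t.
p_copy_lemma = p_switch[1] * alpha_t
print(round(p_vocab.sum().item(), 4), p_copy_lemma.shape)
```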
"At each graph reasoning step t, the relation solver receives a state vector $x_t$ that carries the latest concept decision and the output graph memories $s_0, s_1, \ldots, s_m$ from the graph encoder, and aims to point out the nodes in the current graph that have an immediate relation to the new concept (source nodes) and generate the corresponding edges.",
"Similar to Cai and Lam (2019) and Zhang et al. (2019b), we factorize the task into two stages: first, a relation identification module points to some preceding nodes as source nodes; then, a relation classification module predicts the relation type between the new concept and the predicted source nodes.",
"We leave the latter to be determined after iterative inference.",
"AMR is a rooted, directed, and acyclic graph.",
"The reason for AMR being a graph instead of a tree is that it allows reentrancies where a concept participates in multiple semantic relations with different semantic roles.",
"Following Cai and Lam (2019), we use multi-head attention for a more compact parsing procedure where multiple source nodes are simultaneously determined.",
"Formally, our relation identification module employs H different attention heads; for each head h, we calculate an attention distribution over all existing nodes (including the dummy node $s_0$): $\beta_t^h = \mathrm{softmax}\big((W_h^Q x_t)^\top W_h^K s_{0:m} / \sqrt{d_k}\big)$.",
"Therefore, different heads may point to different nodes at the same time.",
"Intuitively, each head represents a distinct relation detector for a particular set of relation types.",
"(Footnote 5: This is different from Zhang et al. (2019b), where an AMR graph is converted into a tree by duplicating nodes that have reentrant relations.)",
"For each attention head, it will point to a source node if certain relations exist between the new node and the existing graph, otherwise it will point to the dummy node.",
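A rough sketch of this multi-head pointing with random parameters; each head's "pointing" is read off here as the argmax over nodes, with index 0 (the dummy node) meaning no relation:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, d_k, H, m = 16, 8, 4, 3          # hidden size, subspace, heads, existing nodes
W_Q = torch.randn(H, d_k, d)
W_K = torch.randn(H, d_k, d)

x_t = torch.randn(d)                 # state carrying the latest concept decision
s = torch.randn(d, m + 1)            # graph memories s_{0:m}; column 0 is the dummy

queries = torch.einsum('hkd,d->hk', W_Q, x_t)          # (H, d_k)
keys = torch.einsum('hkd,dm->hkm', W_K, s)             # (H, d_k, m+1)
scores = torch.einsum('hk,hkm->hm', queries, keys) / d_k ** 0.5
beta = F.softmax(scores, dim=-1)                       # one distribution per head

# Head h points at its highest-weight node; 0 (dummy) means "no relation".
print(beta.argmax(dim=-1))
```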
"An example with four attention heads and three existing nodes (excluding the dummy node) is illustrated in Figure 3.",
"As described above, the concept solver and the relation solver are conceptually two attention mechanisms over the sequence and graph respectively, addressing the concept prediction and relation prediction separately.",
"The key is to pass the decisions between the solvers so that they can examine each other's answer and make harmonious decisions.",
"Specifically, at each spanning step i, we start the iterative inference by setting $x_0 = h_0$ and solving $f(G^i, x_0)$.",
"After the t-th graph reasoning, we compute the state vector $y_t$, which will be handed over to the concept solver as $g(W, y_t)$: $y_t = \mathrm{FFN}^{(y)}\big(x_t + (W^V h_{1:n})\,\alpha_t\big)$, where $\mathrm{FFN}^{(y)}$ is a feed-forward network and $W^V$ projects text memories into a value space.",
"Similarly, after the t-th sequence reasoning, we update the state vector from $y_t$ to $x_{t+1}$: $x_{t+1} = \mathrm{FFN}^{(x)}\big(y_t + \sum_{h=1}^{H} (W_h^V s_{0:m})\,\beta_t^h\big)$, where $\mathrm{FFN}^{(x)}$ is a feed-forward network and $W_h^V$ projects graph memories into a value space for each head h.",
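A toy sketch of one expansion step's inner loop, with plain single-head dot-product attention standing in for both solvers and untrained linear layers as the FFNs (the real model's projections and multi-head structure are simplified away):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
d, n, m, N = 16, 5, 3, 4
h = torch.randn(n, d)                # text memories (per token)
s = torch.randn(m + 1, d)            # graph memories (per node, incl. dummy)
ffn_y, ffn_x = nn.Linear(d, d), nn.Linear(d, d)

def attend(query, memory):
    """Plain dot-product attention returning a weighted memory summary."""
    w = F.softmax(memory @ query / d ** 0.5, dim=-1)
    return w @ memory

x = h[0]                             # x_0 = h_0 (BOS representation)
for t in range(N):                   # N rounds of graph <-> sequence reasoning
    y = ffn_y(x + attend(x, h))      # state handed to the concept solver
    x = ffn_x(y + attend(y, s))      # state handed back to the graph solver
print(x.shape)                       # x_N feeds the biaffine edge-label classifier
```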
"After N steps of iterative inference, i.e., $x_0 \rightarrow f(G^i, x_0) \rightarrow y_1 \rightarrow g(W, y_1) \rightarrow x_1 \rightarrow \cdots \rightarrow f(G^i, x_{N-1}) \rightarrow y_N \rightarrow g(W, y_N) \rightarrow x_N$, we finally employ a deep biaffine classifier (Dozat and Manning, 2016) for edge label prediction.",
"Algorithm 1 (AMR parsing via graph-sequence iterative inference). Input: the input sentence $W = (w_1, w_2, \ldots, w_n)$; Output: the corresponding AMR graph G. Step 1: $h_0, h_1, \ldots, h_n$ = SequenceEncoder((BOS, $w_1, \ldots, w_n$)) // compute text memories. Step 2: $G^0$ = (nodes = {BOG}, edges = ∅) // initialize graph. Step 3: $i = 0$. Step 4: while true do. Step 5: $s_0, \ldots, s_i$ = GraphEncoder($G^i$) // graph memories can be computed incrementally. Step 6: $x_0 = h_0$. Step 7: for $t \leftarrow 1$ to $N$ do // iterative inference. Step 8: $y_t = f(G^i, x_{t-1})$ // graph reasoning. Step 9: $x_t = g(W, y_t)$ // sequence reasoning.",
"The classifier uses a biaffine function to score each label, given the final concept representation $x_N$ and the node vectors $s_{1:m}$ as input.",
"The resulting concept, edge, and edge label predictions will be added to the new graph $G^{i+1}$ if the concept prediction is not EOG, a special concept that we add to indicate termination.",
"Otherwise, the whole parsing process is terminated and the current graph is returned as the final result.",
"The complete parsing process adopting the iterative inference is described in Algorithm 1.",
"Our model is trained with the standard maximum likelihood estimate.",
"The optimization objective is to maximize the sum of the decomposed step-wise log-likelihood, where each is the sum of concept, edge, and edge label probabilities.",
"To facilitate training, we create a reference generation order of nodes by running a breadth-first-traversal over target AMR graphs, as it is cognitively appealing (core-semantic-first principle, Cai and Lam, 2019) and the effectiveness of pre-order traversal is also empirically verified by Zhang et al. (2019a) in a depth-first setting.",
"For the generation order for sibling nodes, we adopt the uniformly random order and the deterministic order sorted by the relation frequency in a 1 : 1 ratio at first then change to the deterministic order only in the final training steps.",
"We empirically find that the deterministic-after-random strategy slightly improves performance.",
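A minimal sketch of deriving the breadth-first reference order over a toy AMR-like graph for the running example "The boy wants the girl to believe him"; reentrant nodes are emitted only once:

```python
from collections import deque

graph = {"want-01": ["boy", "believe-01"],
         "believe-01": ["girl", "boy"],   # "boy" is a reentrancy
         "boy": [], "girl": []}

def bfs_order(root):
    order, seen, queue = [], {root}, deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in graph[node]:
            if child not in seen:          # reentrant nodes visited once
                seen.add(child)
                queue.append(child)
    return order

print(bfs_order("want-01"))  # ['want-01', 'boy', 'believe-01', 'girl']
```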
"During testing, our model searches for the best output graph through beam search based on the log-likelihood at each spanning step.",
"The time complexity of our model is $O(k|V|)$, where k is the beam size and $|V|$ is the number of nodes.",
"Datasets: Our evaluation is conducted on two AMR public releases: AMR 2.0 (LDC2017T10) and AMR 1.0 (LDC2014T12).",
"AMR 2.0 is the latest and largest AMR sembank that was extensively used in recent works.",
"AMR 1.0 shares the same development and test sets with AMR 2.0, while the size of its training set is only about one-third of AMR 2.0, making it a good testbed to evaluate our model's sensitivity to data size (footnote 6).",
"Implementation Details: We use Stanford CoreNLP (Manning et al., 2014) for tokenization, lemmatization, part-of-speech, and named entity tagging.",
"The hyper-parameters of our models are chosen on the development set of AMR 2.0.",
"Without explicit specification, we perform N = 4 steps of iterative inference.",
"Other hyper-parameter settings can be found in the Appendix.",
"Our models are trained using ADAM (Kingma and Ba, 2014) for up to 60K steps (first 50K with the random sibling order and last 10K with deterministic order), with early stopping based on development set performance.",
"We fix BERT parameters similar to Zhang et al. (2019a,b) due to the GPU memory limit.",
"During testing, we use a beam size of 8 for the highest-scored graph approximation.",
"AMR Pre- and Post-processing: We remove senses as done in Lyu and Titov (2018) and Zhang et al. (2019a,b), and simply assign the most frequent sense to nodes in post-processing.",
"(Footnote 6: There are a few annotation revisions from AMR 1.0 to AMR 2.0. Footnote 7: Our code is released at https://github.com/jcyk/AMR-gs .)",
"Notably, most existing methods, including the state-of-the-art parsers (Zhang et al., 2019a,b; Lyu and Titov, 2018; Guo and Lu, 2018, inter alia), often rely on heavy graph re-categorization for reducing the complexity and sparsity of the original AMR graphs.",
"For graph re-categorization, specific subgraphs of AMR are grouped together and assigned to a single node with a new compound category, which usually involves non-trivial expert-level manual efforts for hand-crafting rules.",
"We follow exactly the same pre- and post-processing steps as Zhang et al. (2019a,b) for graph re-categorization.",
"More details can be found in the Appendix.",
"Ablated Models As pointed out by Cai and Lam (2019), the precise set of graph re-categorization rules differs among different works, making it dif-ficult to distinguish the performance improvement from model optimization and carefully designed rules.",
"In addition, only recent works (Zhang et al., 2019a,b; Lindemann et al., 2019; Naseem et al., 2019) have started to utilize the large-scale pre-trained language model, BERT (Devlin et al., 2019; Wolf et al., 2019).",
"Therefore, we also include ablated models to address two questions: (1) how much does our model's performance depend on handcrafted graph re-categorization rules?",
"(2) How much does BERT help?",
"We accordingly implement three ablated models by removing either one of them or removing both.",
"The ablation study not only reveals the individual effect of two model components but also helps facilitate fair comparisons with prior works.",
"Main Results The performance of AMR parsing is conventionally evaluated by SMATCH (F1) metric (Cai and Knight, 2013).",
"The left block of Table 1 shows the SMATCH scores on the AMR 2.0 test set of our models against the previous best approaches and recent competitors.",
"On AMR 2.0, we outperform the latest push from Zhang et al. (2019b) by 3.2% and, for the first time, obtain a parser with over 80% SMATCH score.",
"Note that even without BERT, our model still outperforms the previous state-of-the-art approaches using BERT (Zhang et al., 2019b,a) with 77.3%.",
"This is particularly remarkable since running BERT is computationally expensive.",
"As shown in Table 2, on AMR 1.0, where the training instances number only around 10K, we improve the best-reported results by 4.1% and reach 75.4%, which is already higher than most models trained on AMR 2.0.",
"[Table 1 columns: Model, G. R., BERT, SMATCH, and fine-grained evaluation (Unlabeled, No WSD, Concept, SRL, Reent.).]",
"The even more substantial performance gain on the smaller dataset suggests that our method is both effective and data-efficient.",
"Besides, again, our model without BERT already surpasses previous state-of-the-art results using BERT.",
"For ablated models, it can be observed that our models yield the best results in all settings if there are any competitors, indicating BERT and graph re-categorization are not the exclusive key for our superior performance.",
"Fine-grained Results: In order to investigate how our parser performs on individual sub-tasks, we also use the fine-grained evaluation tool (Damonte et al., 2017) and compare to systems which reported these scores (footnote 8).",
"As shown in the right block of Table 1, our best model obtains the highest scores on almost all sub-tasks.",
"The improvements across sub-tasks are consistent and uniform (around 2%–3%) compared to the previous state-of-the-art performance (Zhang et al., 2019b), partly confirming that our model boosts performance via consolidated and harmonious decisions rather than fixing particular phenomena.",
"(Footnote 8: We only list the results on AMR 2.0 since there are few results on AMR 1.0 to compare against.)",
"From our ablation study, it is worth noting that the NER scores are much lower when using graph re-categorization.",
"This is because the rule-based system for NER in graph re-categorization does not generalize well to unseen entities, which suggests a potential improvement from adapting better NER taggers.",
"Effect of Iterative Inference We then turn to study the effect of our key idea, namely, the iterative inference design.",
"To this end, we run a set of experiments with different values of the number of the inference steps N .",
"The results on AMR 2.0 are shown in Figure 4 (solid line).",
"As seen, the performance generally goes up when the number of inference steps increases.",
"The difference is most noticeable between N = 1 (where no iterative reasoning is performed) and N = 2, while later improvements gradually diminish.",
"One important point here is that the model size in terms of the number of parameters is constant regardless of the number of inference steps, making it different from general over-parameterized problems.",
"For a closer study on the effect of the inference steps with respect to the lengths of input sentences, we group sentences into three classes by length and also show the individual results in Figure 4 (dashed lines).",
"As seen, the iterative inference helps more for longer sentences, which confirms our intuition that longer and more complex input needs more reasoning.",
"Another interesting observation is that the performance on shorter sentences reaches the peaks earlier.",
"This observation suggests that the number of inference steps can be adjusted according to the input sentence, which we leave as future work.",
"Effect of Beam Size We are also interested in the effect of beam size during testing.",
"Ideally, if a model is able to make accurate predictions in the first place, it should rely less on the search algorithm.",
"We vary the beam size and plot the curve in Figure 6.",
"The results show that the performance generally gets better with larger beam sizes.",
"However, a small beam size of 2 already gets the most of the credits, which suggests that our model is robust enough for time-stressing environments.",
"Visualization We visualize the iterative reasoning process with a case study in Figure 5.",
"We illustrate the values of $\alpha_t$ and $\beta_t$ as the iterative inference progresses.",
"As seen, the parser makes mistakes in the first step, but gradually corrects its decisions and finally makes the right predictions.",
"Later reasoning steps typically provide a sharper attention distribution than earlier steps, narrowing down the most likely answer with more confidence.",
"Speed We also report the parsing speed of our non-optimized code: With BERT, the parsing speed of our system is about 300 tokens/s, while without BERT, it is about 330 tokens/s on a single Nvidia P4 GPU.",
"The absolute speed depends on various implementation choices and hardware performance.",
"In theory, the time complexity of our parsing algorithm is $O(kbn)$, where k is the number of iterative steps, b is the beam size, and n is the graph size (number of nodes).",
"It is important to note that our algorithm is linear in the graph size.",
"We presented the dual graph-sequence iterative inference method for AMR Parsing.",
"Our method constructs an AMR graph incrementally in a node-by-node fashion.",
"Each spanning step is explicitly characterized as answering two questions: which parts of the sequence to abstract, and where in the graph to construct.",
"We leverage the mutual causalities between the two and design an iterative inference algorithm.",
"Our model significantly advances the state-of-the-art results on two AMR corpora.",
"An interesting future work is to make the number of inference steps adaptive to input sentences.",
"Also, the idea proposed in this paper may be applied to a broad range of structured prediction tasks (not only restricted to other semantic parsing tasks) where the complex output space can be divided into two interdependent parts with a similar iterative inference process to achieve harmonious predictions and better performance."
] | [
"objective",
"objective",
"result",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"objective"
] |
[
"Automatic question generation ( QG ) has shown promise as a source of synthetic training data for question answering ( QA ).",
"In this paper we ask: Is textual diversity in QG beneficial for downstream QA ?",
"Using top-p nucleus sampling to derive samples from a transformer-based question generator, we show that diversity-promoting QG indeed provides better QA training than likelihood-maximization approaches such as beam search.",
"We also show that standard QG evaluation metrics such as BLEU , ROUGE and METEOR are inversely correlated with diversity, and propose a diversity-aware intrinsic measure of overall QG quality that correlates well with extrinsic evaluation on QA .",
"Besides areas such as dialog (Bordes et al., 2017) and tutoring systems (Lindberg et al., 2013), automatic question generation (QG) has recently been applied with great success to generating synthetic training examples for question answering (QA) (Alberti et al., 2019; Dong et al., 2019).",
"Yet an important question has remained unexplored: Does increased textual diversity in automatically generated questions lead to better QA ?",
"In Figure 1 we show four questions generated by one of our QG models (details in Section 2) from a SQuAD (Rajpurkar et al., 2016) passage and an answer span (the QG prompt).",
"The questions are different not only lexically, but also in what information about the answer entity they draw upon and even their use of world knowledge, e.g., Tesla's reputation as a mad scientist.",
"Intuitively, such sample diversity, if sufficiently accurate, could provide QA models with rich training signal.",
"Existing QG work has predominantly relied on customary beam search decoding for generation and n-gram similarity metrics such as BLEU for evaluation (Du et al., 2017; Alberti et al., 2019; Dong et al., 2019; Zhang and Bansal, 2019) (footnote 1).",
"[Figure 1: passage — 'On Tesla's 75th birthday in 1931, Time magazine put him on its cover. The cover caption All the world's his power house noted his contribution to electrical power generation. He received congratulatory letters from more than 70 pioneers in science and engineering, including Albert Einstein.' Generated questions: 'Who appeared on Time magazine's cover on his 75th birthday?'; 'Which famous scientist was on the cover of Time Magazine in 1931?'; 'Which mad scientist received more than 70 people congratulating him on his birthday?'; 'What famous scientist was also 75?']",
"Such methods/metrics solely optimize/reward similarity with human-generated reference questions treated as the ground truth (GT).",
"However, in many open-ended generation tasks where only one or a few of many possible GT s are available through human annotation, this approach directly penalizes diversity by discouraging deviation from the GT (s).",
"In recent years, massively pre-trained neural language models ( LM s) (Devlin et al., 2019; Radford et al., 2019; Liu et al., 2019) have revolutionized NLP .",
"In open-ended text generation, these models show remarkable robustness under sampling (Radford et al., 2019; Holtzman et al., 2020).",
"This observation, coupled with the examples presented in Figure 1, suggests that treating QG for QA as a more open-ended generation problem and relying on the power of modern text generators to produce diverse yet accurate samples might yield better QA results than the current approach of optimizing for the most likely question.",
"(Footnote 1: http://aqleaderboard.tomhosking.co.uk/squad)",
"We explore this question by fine-tuning RoBERTa (Liu et al., 2019) for QG, and sampling questions from it using top-p nucleus sampling (Holtzman et al., 2020).",
"Other diversity-promoting text generation techniques exist, both at training time (e.g., VAEs (Kingma and Welling, 2014)) and during inference (e.g., top-k sampling and diverse beam search (Vijayakumar et al., 2018)), that have been applied to various NLP tasks: language modeling (Bowman et al., 2016), dialog (Cao and Clark, 2017), visual QG (Jain et al., 2017; Fan et al., 2018), image captioning (Vijayakumar et al., 2018) and so on.",
"We choose nucleus sampling because of its effectiveness, simplicity and speed.",
"Our experiments lead to the following discoveries: • Nucleus sampling indeed produces better QA results than beam search, even when only one question is generated per prompt.",
"• QG metrics that only reward similarity with GT are negatively correlated with diversity, and as a result, are inaccurate predictors of downstream QA performance of diversity-promoting QG.",
"• A measure of QG can be devised that combines diversity with similarity to GT, showing strong correlations with QA performance.",
"We fine-tune a RoBERTa masked LM (Liu et al., 2019) for QG given an answer span within a textual context (as shown in Figure 1), and use nucleus sampling (Holtzman et al., 2020) for generation.",
"Model: Various transformer architectures can be used for text generation (Raffel et al., 2019).",
"Following Dong et al. (2019) and Alberti et al. (2019), we fine-tune a pre-trained masked LM as a prefix LM (Raffel et al., 2019) to predict a question token $q_t$ given (1) a prompt $p_{1:N}$: a tokenized textual context with special tokens delimiting an answer span, and (2) question tokens $q_{1:t-1}$, if any, that have already been generated for the given prompt in left-to-right order.",
"A special separator token separates the question prefix from the prompt.",
"The prompt is encoded using bidirectional attention and question tokens using causal (left-only) attention.",
"We choose RoBERTa as our pre-trained model because of its extended pre-training on large amounts of text (Liu et al., 2019).",
"Our implementation of the QG model is based on Hugging Face's (Wolf et al., 2019) PyTorch implementation of RoBERTa.",
"Fine-Tuning: For each QG training example, the model is asked to predict a single question token $q_t$ given the prompt $p_{1:N}$, the previous question tokens $q_{1:t-1}$ (teacher-forced), and the mask m at timestep t.",
"All questions end with an EOS token that marks the end of generation.",
"Training attempts to minimize the masked LM loss, i.e., the negative log-likelihood of the GT token $q_t$ as the prediction for m at position t: $\mathrm{loss}_t = -\log P(q_t \mid p_{1:N}, q_{1:t-1}, m)$.",
"Inference: During generation, the fine-tuned RoBERTa QG model outputs a probability distribution over the entire vocabulary at each question timestep t.",
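Per timestep, this is an ordinary cross-entropy at the masked position; a toy sketch with random logits and a hypothetical gold token id:

```python
import torch
import torch.nn.functional as F

V = 100                              # toy vocabulary size
logits = torch.randn(V)              # model scores at the masked position t
gold_token = 42                      # hypothetical ground-truth token id q_t
loss_t = F.cross_entropy(logits.unsqueeze(0), torch.tensor([gold_token]))
print(loss_t.item())                 # equals -log softmax(logits)[gold_token]
```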
"Top-p nucleus sampling (NS@p henceforth) samples from the (re-normalized) categorical distribution $P_N$ of the nucleus N, which is the smallest subset of vocabulary items that has (1) a cumulative probability mass greater than p, and (2) the highest probability among all such subsets: $q_t \sim P_N(q_t \mid p_{1:N}, q_{1:t-1}, m)$.",
"By restricting the pool to a high-likelihood region of the vocabulary, NS, compared to top-k sampling, reduces the chances of generating low-probability items when the original distribution is peaked at one or a few items.",
"Our question generation works by repeated nucleus sampling of question tokens until $q_t = \mathrm{EOS}$.",
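A self-contained sketch of top-p sampling of one token (the vocabulary size below is illustrative, and the paper's cap of 20 nucleus items is omitted for brevity):

```python
import torch
import torch.nn.functional as F

def nucleus_sample(logits: torch.Tensor, p: float = 0.95) -> int:
    """Top-p sampling: keep the smallest highest-probability set whose
    cumulative mass exceeds p, renormalize, and sample from it."""
    probs = F.softmax(logits, dim=-1)
    sorted_probs, sorted_ids = probs.sort(descending=True)
    cumulative = sorted_probs.cumsum(dim=-1)
    # Keep everything up to (and including) the first item crossing p.
    cutoff = int((cumulative < p).sum().item()) + 1
    nucleus_probs = sorted_probs[:cutoff] / sorted_probs[:cutoff].sum()
    choice = torch.multinomial(nucleus_probs, 1).item()
    return sorted_ids[choice].item()

# Generation would repeat this per timestep until the EOS token is drawn.
token = nucleus_sample(torch.randn(50265), p=0.95)
print(token)
```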
"To test the effect of QG diversity on QA , we generate questions with both nucleus sampling and beam search from a number of different QG models and compare their performance.",
"General Setup: Considering that performances of different generation methods may vary across models of different capacities, we train eight QG models, each uniquely characterized by: (1) its size ( # of parameters), and (2) the amount of training data it was fine-tuned on.",
"The two model sizes are those of RoBERTa: base (125M parameters) and large (355M parameters).",
"For fine-tuning we use the train set of the SQuAD1 split by Du et al. (2017) (footnote 2).",
"This is a three-way split of the public portion of SQuAD1 widely adopted in the QG literature, with approximately 76k train, 18k dev and 12k test (prompt, question) pairs.",
"We draw varying amounts of samples (ranging from 5% to 100%) at random from the train set to fine-tune each model on, simulating different points on the low- to high-resource spectrum.",
"(Footnote 2: https://github.com/xinyadu/nqg/blob/master/data/raw/)",
"In-Domain Experiments: With each QG model, we generate questions for all prompts in the SQuAD1 Du dev set.",
"These questions are first evaluated using existing generation metrics: BLEU , ROUGE and METEOR .",
"To extrinsically evaluate on QA , we then (1) fine-tune a BERT (Devlin et al., 2019) whole-word-masked (wwm) LM for QA on the generated dev examples from each model, and (2) evaluate on test .",
"For each of the eight QG models, we evaluate beam search ( BEAM henceforth) and NS @ p for different values of p .",
"Our BEAM experiments with the RoBERTa-base model did not show significant performance differences between beam sizes 5 and 10; therefore we report results only for b = 5 in this paper.",
"An important point to note here is that given paragraph-long input prompts in QG for QA , where large numbers of synthetic examples may also be needed in many practical use cases, large beam sizes can become prohibitively expensive from a computational standpoint for transformer-based generators.",
"For NS, we evaluate with p ∈ {0.1, 0.5, 0.75, 0.95}.",
"Among these, p = 0.1 closely approximates greedy decoding, as we observed for all models an average nucleus size of practically 1 in this setup.",
"We also set the maximum number of vocabulary items in a nucleus to 20, which even the largest p values rarely reached in our experiments.",
"Table 1 shows the performance (mean over five different seeds) of all generators in BLEU-1 (B1), ROUGE-4 (R4) and METEOR (MT), the variant in each metric family that showed the highest correlation with downstream QA performance.",
"We also show QA performance measured by SQuAD's official F1 score metric, which computes the degree of lexical overlap between the predicted and the target answer.",
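The sketch below implements this token-overlap F1 in its simplest form (whitespace tokenization only; the official script additionally lowercases and strips articles and punctuation):

```python
from collections import Counter

def squad_f1(prediction: str, target: str) -> float:
    """Token-level overlap F1 between predicted and gold answer strings."""
    pred, gold = prediction.split(), target.split()
    common = Counter(pred) & Counter(gold)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(squad_f1("Nikola Tesla", "the scientist Nikola Tesla"))  # ~0.67
```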
"As expected, model performance improves with both model size and # of training instances, both in intrinsic evaluation and on QA .",
"Importantly, however, while BEAM has the best intrinsic evaluation results for all eight models, it is competitive in QA only in the lowest-resource setup (5% training data).",
"On the other hand, NS@.95 has the lowest QG but the highest QA scores, especially when sufficient training data is available (20% or more).",
"Note that in these experiments we generate a single question per prompt; yet generation diversity across different prompts yields higher-quality QA training data for NS , which is also a faster alternative to BEAM .",
"Sampling five questions per prompt from the large-100% model with NS@.95 provides additional improvement (F1 = 86.4).",
"As we increase p to make generation more diverse, the chances of NS@p drawing less likely candidates, and thus generating incorrect questions, also go up.",
"In Table 1, the gains in QA due to QG diversity are generally greater than any drop in performance likely due to decreased accuracy.",
"To find out if the same holds in a more challenging out-of-domain setup, we perform a zero-shot application (i.e., with no further fine-tuning) of four of the above SQuAD-trained QG models to NewsQA, a reading comprehension dataset of CNN news articles (Trischler et al., 2017).",
"Table 2 shows results on the answerable subset of NewsQA, with 76k train (from which we extract our QG prompts) and 4k test (used for QA evaluation) samples: while the absolute scores are lower than those on SQuAD, the relative performance of BEAM and NS is similar in both intrinsic (the best predictor of QA performance for NewsQA was ROUGE-4) and extrinsic (QA F1) evaluation.",
"Generation: To assess the quality of our generated questions in absolute terms, in Table 3 we compare the QA performances of the best QG model above ( large -100%, NS @ . 95 ) and corresponding human annotations ( GT ).",
"Impressively, in-domain model performance on QA is very similar to that of GT , while zero-shot score on N ews QA is also within roughly 4 points of GT .",
"We also evaluate the generator's ability to augment human-generated questions.",
"Taking an approach similar to prior augmentation experiments dataset train source QAF 1 SQ u AD 1D u GT ( dev ) 86.3 SYNTH 86.1 5 -SYNTH 86.4 SYNTH * + GT 88.6 N ews QA GT ( train ) 67.9 SYNTH 63.8 SYNTH * + GT 69.2 Table 3: Diverse QG (SYNTH ; NS @ . 95 ) shows impressive QA results compared to human annotation ( GT ), and in augmenting GT (SYNTH * + GT ).",
"(Dong et al., 2019; Alberti et al., 2019), we generate a large synthetic dataset SYNTH * of 4 million examples from Wikipedia passages.",
"The answer spans in these examples are extracted from their corresponding passages using a separate QA model which we train on ten SQ u AD question types (in-stead of full-length questions): what , which , where , who , when , why , how , how many , how much , and how long .",
"SYNTH * is used to fine-tune a BERT wwm LM for QA , which is finally fine-tuned on the target datasets ( SQ u AD 1D u, N ews QA ).",
"As Table 3 shows, SYNTH * achieves 1.32.3 absolute points improvements for the high-performance large BERT -wwm model.",
"Summary of Results: The above results empirically show that given enough training data and sufficiently powerful QG models: (1) diverse QG leads to strong in-domain and out-of-domain QA training, (2) asking the most likely question (i.e., beam search) every time is less useful, and (3) existing generation metrics are inadequate for evaluating diverse question generators as sources of QA training examples.",
"To better understand the performance of existing generation metrics as measures of diverse QG , we take the set of all 32 samplers in Table 1 (e.g., base -100%p @.75) and randomly generate a large number (100k) of subsets, each consisting of n samplers ( 2 n 32 ) to be evaluated.",
"We assign each n ( # of samplers) to a bin and measure performances of QG metrics separately in each bin.",
"The process is repeated for Table 2.",
"Note that the member sets of a given bin, say n = 5, all contain the same number of generators (5), but the actual selection of generators are generally different in different members of a bin.",
"This setup allows us to evaluate a varying number of generators with different capacities and performance, and to average Figure 2: Performances of existing and proposed generation metrics as measures of diverse QG for QA .",
"Figure 2 shows for all bins a rather poor, for some bins negative, median Spearman's score between the best QG metric ( SQ u AD 1D u: ROUGE 4, N ews QA : ROUGE -1) and downstream QAF 1 .",
"These results provide quantitative confirmation that ROUGE and similar metrics are inadequate evaluators of diverse QG for QA due to their sole focus on accuracy with respect to available GT s.",
"This leads us to our final research question: How to intrinsically measure the overall quality of QG for QA under diverse nucleus sampling?",
"Given the categorical distribution PN of vocabulary items in a model's nucleus N , we propose to measure both its accuracy (relative to GT ) and diversity of generation.",
"Accuracy: Similarly to LM perplexity, for timestep t of evaluation example s , we take the probability PN ( q s,t | p , q s, 1: t \u0000 1 ) of the model (more precisely, its nucleus N ) generating the GT token q s,t , given prompt p and GT history q s, 1: t \u0000 1 .",
"We then average over all evaluation ( s, t ) pairs to compute model accuracy P ( GT ) .",
"Diversity: An intuitive measure of the diversity of a model's nucleus N is the average entropy of PN over all evaluation timesteps.",
"However, entropy is an unbounded measure, and has a non-linear inverse growth relative to our proposed accuracy metric, which makes their mathematical combination difficult.",
"We instead rely on the observation that as we increase p in NS @ p to make generation more diverse, the cardinality of N also goes up, on average, and so does the probability P ( GT 2 N ) that N contains the GT token.",
"Our experiments on both datasets showed that this measure of diversity, computed as the proportion of times N was found to include GT across all timesteps in the QG evaluation data, has high positive correlations with the entropy of PN (Pearson's r : 98%99%, Spearman's : 87%95%).",
"Note that unlike the accuracy metric P ( GT ) , at each timestep t , the diversity metric P ( GT 2 N ) is Boolean: the GT token is either in N or it is not.",
"But importantly, its average across many evaluation timesteps is a probability measure of diversity, which enables a straightforward convex combination with our proposed accuracy metric.",
"Our final QG metric is a weighted sum of accuracy and diversity: w P ( GT )+(1 \u0000 w ) P ( GT 2 N ) , where w 2 [0 , 1] is a tunable parameter reflecting the weight of accuracy relative to diversity.",
"In our experiments, this metric outperforms all existing metrics by a large margin for a wide range of w values.",
"In Figure 2, the median Spearman's score between this metric and QAF 1 in both in-domain ( w = . 7 ) and out-of-domain ( w = . 8 ) evaluation is over 90% for all bins.",
"We observe similar performance differences between the proposed and existing metrics with Pearson's r .",
"Given the scope of this paper, we evaluate the combined metric only on QG , but the underlying ideas apply to diverse text generation in general.",
"Further experiments are necessary to evaluate the metric on other generation tasks.",
"While diversity of generation has received significant attention in other text generation problems (e.g., dialog), we show in this paper that it is also an important and measurable dimension of quality in question generation for QA .",
"We hope that our work will encourage further exploration of diversity-promoting QG and its evaluation.",
"Possible future directions include a systematic study of different aspects of QG diversity (e.g., lexical and factual) and controlled diversification of individual aspects in generation.",
"We thank the anonymous reviewers for their valuable feedback."
] | [
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"other"
] |
[
"Abstract Automatic dialogue coherence evaluation has attracted increasing attention and is crucial for developing promising dialogue systems.",
"However, existing metrics have two major limitations:",
"(a) they are mostly trained in a simpli-fied two-level setting (coherent vs. incoherent), while humans give Likert-type multi-level coherence scores, dubbed as quantifiable;",
"(b) their predicted coherence scores cannot align with the actual human rating standards due to the absence of human guidance during training.",
"To address these limitations, we propose Quanti fiable D ialogue C oherence E valuation (QuantiDCE), a novel framework aiming to train a quantifiable dialogue coherence metric that can reflect the actual human rating standards.",
"Specifically, QuantiDCE includes two training stages, Multi-Level Ranking (MLR) pre-training and Knowledge Distillation (KD) fine-tuning.",
"During MLR pre-training, a new MLR loss is proposed for enabling the model to learn the coarse judgement of coherence degrees.",
"Then, during KD fine-tuning, the pretrained model is further finetuned to learn the actual human rating standards with only very few human-annotated data.",
"To advocate the generalizability even with limited finetuning data, a novel KD regularization is introduced to retain the knowledge learned at the pre-training stage.",
"Experimental results show that the model trained by QuantiDCE presents stronger correlations with human judgements than the other state-of-the-art metrics.",
"1 1 Introduction Dialogue coherence, which requires a response to be fluent, consistent and context-related, is an essential property for developing promising dialogue Corresponding Author.",
"systems (Cervone et al., 2018).",
"However, it is still challenging to evaluate the coherence of a response generated by a dialogue system.",
"Although human evaluation is always considered as the most accurate way to evaluate the coherence, it is expensive and high-latency, which cannot meet the evaluation demand of the frequent development of dialogue systems.",
"Therefore, automatic evaluation metrics are developed to serve as human proxies that can rapidly compute the dialogue coherence and return relatively accurate results.",
"The current widely used metrics measure the lexical word-overlap between generated responses and reference responses, such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004).",
"However, they have been demonstrated to be biased and correlate poorly with human judgements since no semantic information is considered (Liu et al., 2016; Novikova et al., 2017).",
"To overcome this issue, researchers turned to develop learnable metrics based on neural networks that incorporate the semantic information, such as RUBER (Tao et al., 2018), BERT-RUBER (Ghazarian et al., 2019) and GRADE (Huang et al., 2020).",
"However, these metrics deviate from the actual human rating due to two limitations.",
"First, they simplify the coherence evaluation task in a two-level setting, i.e., coherent or incoherent, by maximizing the differences between the positive coherent dialogues and the negative incoherent ones obtained by some negative sampling strategies.",
"In contrast, humans usually adopt Likert scaling and give coherence scores from multiple levels like 1 to 5, as shown in Figure 1. Second, to avoid relying on large-scale human-annotated data, they are mostly trained in a purely unsupervised manner and cannot align with the human rating due to the absence of introducing the actual human rating standards during training.",
"To address the above limitations, we propose a novel dialogue coherence metric training framework, named as Quanti fiable D ialogue C oherence E valuation (QuantiDCE).",
"This framework consists of two training stages: Multi-Level Ranking (MLR) pre-training and Knowledge Distillation (KD) finetuning.",
"At the MLR pre-training stage, a new multilevel ranking (MLR) loss is proposed for learning the coarse judgement of coherence degrees.",
"Specifically, the MLR loss separates the context-response pairs with different coherence levels and compacts the pairs within the same level in one-dimensional score space.",
"As a result, the pretrained model is able to distinguish different coherence-level dialogue responses for a given context and predicts more accurate coherence scores.",
"At the KD finetuning stage, the pretrained model is further finetuned to learn the actual human rating standards with only very few human-annotated coherence scores.",
"To mitigate overfitting into the scarce annotated data during fine-tuning, a novel knowledge distillation regularization loss is introduced to retain the knowledge learned at the pre-training stage, where the pretrained model (teacher) provides the soft targets for the model during fine-tuning (stu-dent).",
"Experimental results show that the metric trained by our QuantiDCE obviously outperforms the other state-of-the-art metrics in terms of the Pearson, Spearman and Kendall correlations with human judgements by around 5% points on average.",
"To summarize our contributions: 1) We propose QuantiDCE, a novel quantifiable training framework for dialogue coherence evaluation, which aims to align the automatic scores with the actual human rating standards via MLR pre-training and KD fine-tuning.",
"To the best of our knowledge, it is the first attempt to consider the quantifiable problem for dialogue coherence evaluation.",
"2) Extensive experiments demonstrate the effectiveness of our QuantiDCE, which enables the trained metric to have obviously stronger correlations with human judgements than the other state-of-the-art metrics.",
"Automatic Coherence Evaluation.",
"The widely used automatic metrics, such as BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005) and ROUGE (Lin, 2004), use statistical rules to measure the degree of lexical word-overlap between generated responses and reference responses.",
"However, these metrics have been demonstrated to correlate poorly with human judgments due to the absence of semantic information (Liu et al., 2016; Novikova et al., 2017).",
"Therefore, the subsequent metrics are considered to incorporate the semantic information.",
"For instance, BERTScore (Zhang et al., 2020) turns to measure the soft semantic word-overlap rather than the hard lexical word-overlap like BLEU.",
"Moreover, learnable metrics encoding the semantic information have been attracting interests recently, which are trained in a supervised manner with large-scale human-annotated data, such as ADEM (Lowe et al., 2017), or trained in an unsupervised manner with automatically constructed data, such as RUBER (Tao et al., 2018) and BERT-RUBER (Ghazarian et al., 2019).",
"Furthermore, the recently proposed coherence metric, GRADE (Huang et al., 2020), introduces the graph information of dialogue topic transitions and achieves the current state-of-the-art results.",
"Note that these learnable metrics are trained in a two-level training objective to separate the coherent dialogues from the incoherent ones, while our QuantiDCE models the task in a multi-level setting which is closer to the actual human rating.",
"Knowledge Distillation.",
"Knowledge distillation (KD) is a method that transfers the knowledge from a large trained teacher model to a smaller student model by using the soft targets provided by the teacher (Hinton et al., 2015).",
"In recent years, KD has been applied to many specific tasks (Sun et al., 2020; Wei et al., 2019; Kim and Rush, 2016; Sourty et al., 2020).",
"Unlike these previous works, we use KD to retain knowledge learned at the pre-training Stage One: MLR Pre-training BERTMLPBERTMLP Human rating Stage Two: KD Fine-tuning Teacher model KD Student model Level-1 Centroid Score Level-2 Centroid Score Level-3 Centroid Score Level-1 Scores Dialogue Example Context Level-1 Level-2 Level-3 Response separation compactness Level-2 Scores Level-3 Scores kd_ Separation loss Compactness Loss Ordering Loss Concat Concat Concat Figure 2: The overall pipeline of our QuantiDCE, consisting of two training stages which are marked by the blue and the black one-way arrows.",
"In this section, we present QuantiDCE, a two-stage framework for dialogue coherence metric learning, consisting of Multi-Level Ranking (MLR) pre-training and Knowledge Distillation (KD) finetuning.",
"As illustrated in Figure 2, given a metric model M (Section 3.1), QuantiDCE enables M to learn multi-level representations for context-response pairs with different levels of coherence degrees during the pre-training stage (Section 3.2), and further to learn the rating standards of humans with only a fraction of data during the fine-tuning stage (Section 3.3).",
"After these two training stages, the quantifiable gap between automatic metrics and humans can be obviously reduced.",
"In our QuantiDCE framework, the metric model M is composed of: (1) an encoder network for encoding the input context-response pairs into features and (2) a predictor network for transforming the encoded features into coherence scores.",
"Specifically, we adopt BERT (Devlin et al., 2019) as the encoder network and a multi-layer perceptron (MLP) as the predictor network.",
"Given a context c = { c 1 , , c m } and a response r = { r 1 , , r n } where c i and r i are tokens of the context and the response respectively, the c and r are concatenated as { [CLS] , c 1 , , c m , [SEP] , r 1 , , r n , [SEP] } , denoted as [ c ; r ] .",
"Then the coherence score s of the response r w.r.t. the context c is predicted by: s = MLP ( BERT ([ c ; r ])) , (1) where MLP is a three-layer fully-connected network in which the activation functions of the three layers are two exponential linear units (Clevert et al., 2016) and a sigmoid function, respectively.",
"For learning the coarse judgement of coherence degrees without the direct supervision of score annotations, the model M is first pretrained by minimizing a new multi-level ranking (MLR) loss on a large-scale dialogue dataset.",
"Concretely, the MLR loss is composed of a separation loss, a compactness loss and an ordering loss.",
"R i = { ( r ji, 1 , , r ji,K ) } Lj =1 is a response set with L coherence levels 2 and K responses for each level, the model M is trained by minimizing the following MLR loss:",
"where (cid:96) sepi , (cid:96) comi , and (cid:96) ordi refer to the separation loss, the compactness loss and the ordering loss of the i th example, respectively.",
"The separation loss aims to separate the features of context-response pairs with different coherence levels by separating the coherence scores of the different pairs 3 .",
"Moreover, to efficiently compute the loss, we first compute the centroids of the context-response pairs belonging to the same coherence level for the i th dialogue example, i.e., e i = { e ji = (cid:80) Kk =1 s j i,k | j [1 , L ] , e j i R } where s ji,k is the coherence score of the context-response pair ( c i , r ji,k ), and the separation loss between the centroids is then computed as follows: (cid:96) sepi = L 1 (cid:88) j =1 L (cid:88) l = j +1 max (0 , w d( e ji , e li )) , (3) where d( ) is the L1 distance, is the lower bound for the distance between two centroids, and w = l j is the distance weight used for amplifying the lower bound w.r.t. the coherence-level gap.",
"The compactness loss aims to compact the pairs within the same level, which served as a regularization role to avoid the occurrence of outliers for each coherence level.",
"Specifically, the coherence score s j i,k is forced to be closer to the corresponding centroid e ji as follows: (cid:96) comi = L (cid:88) j =1 K (cid:88) k =1 max (0 , d( e ji , s ji,k ) ) , (4) where is the upper bound for the distance between the centroid of a certain coherence level and the score within this level.",
"2 The coherence level is in ascending order, i.e., the response in a higher level is more coherent than the lower one.",
"3 We also tried to directly restrict the features of differentlevel pairs to be separated, but the performance dropped compared with restricting the scores.",
"The ordering loss is finally introduced to assure that the rank order of the predicted scores satisfies the pre-defined order of coherence degrees, i.e., s ji,k < s j +1 i,k , j [1 , L 1] , k [1 , K ] .",
"It is critical since the separation loss only restricts the scores of the pairs from different coherence levels to be separated and this restriction is also satisfied when the scores of the highest level are lower than the scores of the lowest level.",
"Similar to the separation loss, the ordering loss is also computed between each two centroids as follows: (cid:96) ordi = L 1 (cid:88) j =1 L (cid:88) l = j +1 max (0 , e li e ji ) .",
"The model M pretrained by the MLR loss is further trained at the KD fine-tuning stage to directly learn the actual human rating standards with only a fraction of annotated data.",
"Formally, given a training dataset D ft = { ( c i , r i , s i ) } N 2 i =1 where c i , r i and s i are the dialogue context, the corresponding response and the human-annotated coherence score of r i w.r.t. c i respectively, the previous fine-tuning approach for the scoring task usually optimizes the model M with an MSE loss between the predicted score s i and the human score s i : (cid:96) msei = ( s i s i ) 2 .",
"However, by minimizing (cid:96) msei for each example, the model M will be easily over-fitting on the very few annotated data, and thus the model generalizability will be dramatically reduced.",
"To overcome this issue, a novel knowledge distillation (KD) regularization is introduced for retaining the knowledge learned at the MLR pre-training stage.",
"Concretely, the pretrained model M is treated as the teacher model that provides the soft targets for the student model M which is entirely copied from M .",
"And we adopt the distillation objectives of TinyBERT (Jiao et al., 2020), including the distillations of the embedding layer, the Transformer layers and the prediction layer.",
"The KD loss is then formulated as: (cid:96) kdi = T +1 (cid:88) t =0 || O ti O ti || 22 + T (cid:88) t =1 || A ti A ti || 22 , (7) where || || 22 indicates the squared L2 norm, T is the number of the Transformer layers, O ti and O ti are Algorithm 1 Training Procedure of QuantiDCE Input: training datasets D pt and D ft , metric model M Output: student model M 1: initialize M with BERTBASE 2: for all ( c i , R i ) D pt do 3: S i = M ( c i , R i ) 4: compute the centroids e i for S i 5: compute (cid:96) sepi and (cid:96) ordi for e i 6: compute (cid:96) comi between e i and S i 7: compute L mlr 8: update M to minimize L mlr 9: end for 10: initialize M with M 11: for all ( c i , r i , s i ) D ft do 12: O i , A i = M ( c i , r i ) 13: s i , O i , A i = M ( c i , r i ) 14: compute (cid:96) msei between s i and s i 15: compute (cid:96) kdi between O i , A i and O i , A i 16: compute L kd mse 17: update M to minimize L kd mse 18: end for 19: return student model M the t th layer outputs of M and M respectively, A ti and A ti are the attention matrices of the t th transformer layer.",
"Note that the layer 0 and the layer T+1 refer to the embedding layer and the prediction layer respectively.",
"Overall, the loss function for KD fine-tuning, named as KD-MSE loss, is the weighted sum of (cid:96) msei and (cid:96) kdi across the whole training dataset D ft : L kd mse = 1 N 2 N 2 (cid:88) i =1 ( (cid:96) msei + (cid:96) kdi ) , (8) where and are hyperparameters, and we empirically found that = 1 and = 5 performs well.",
"The overall training procedure is summarized in Algorithm 1. 4 Experiments 4.1 Experimental Setup Baseline Metrics.",
"We compare the metric model trained by our QuantiDCE with eight popular automatic dialogue metrics, including three lexical word-overlap metrics: BLEU (Papineni et al., 2002), ROUGE (Lin, 2004) and METEOR (Banerjee and Lavie, 2005), one semantic word-overlap metric, BERTScore (Zhang et al., 2020), and four learnable metrics: ADEM (Lowe et al., 2017), BERT-RUBER (Ghazarian et al., 2019), BLEURT (Sellam et al., 2020) and GRADE (Huang et al., 2020).",
"Evaluation.",
"Our QuantiDCE and the baselines are evaluated by computing the correlations between the model-predicted scores and the human-rated scores.",
"Specifically, we adopt Pearson, Spearman and Kendall as the correlation measures and a large-scale human judgement benchmark (Huang et al., 2020) to provide the human-rated scores.",
"This benchmark contains 1,200 unique (context, response, human-rated score) triplets for metric evaluation where the contexts were randomly selected from the test set of three chit-chat datasets including DailyDialog (Li et al., 2017), ConvAI2 (Dinan et al., 2019) and EmpatheticDialogues (Rashkin et al., 2019), and the responses were produced by both the retrieval-based dialogue models and the generation-based ones to assure response diversity.",
"Training Datasets.",
"We use two datasets, Daily-Dialog++ 4 and DailyDialogEVAL 5 , to support the pre-training and fine-tuning of QuantiDCE, respectively.",
"The DailyDialog++ dataset (Sai et al., 2020) contains over 11K conversations, which augments the original DailyDialog dataset with multiple responses of different quality levels including five golden reference responses, five adversarial irrelevant responses and five random selected responses for each context.",
"Therefore, in this work, we set the number of coherence levels L = 3 where the pairs containing the random responses, the adversarial responses and the reference responses respectively belong to the levels from 1 to 3.",
"As to the fine-tuning data, we use the DailyDialog human judgement dataset, denoted as DailyDialogEVAL, which is a subset of the adopted evaluation benchmark (Huang et al., 2020), with 300 human rating data in total, and randomly split the data into training (90%) and validation (10%) sets.",
"Implementation Details.",
"We use BERTBASE to initialize the encoder network, which is in line with the current SOTA metric, GRADE.",
"For the MLR pre-training, we pretrain our model for 5 epochs with batch size 3 and learning rate 2e-5 where the lower bound for the separation loss = 0.3 and the upper bound for the compactness loss 4 https://github.com/iitmnlp/ Dialogue-Evaluation-with-BERT 5 https://github.com/li3cmz/GRADE Metric Pearson Spearman Kendall Average ConvAI2 BLEU 0.003 * 0.128 0.088 0.073 ROUGE 0.136 0.140 0.097 0.124 METEOR 0.145 0.181 0.123 0.15 BERTScore 0.225 0.225 0.154 0.201 ADEM 0.026 * 0.037 * 0.049 * 0.037 BERT-RUBER 0.266 0.266 0.185 0.239 BLEURT 0.152 0.149 0.103 0.135 GRADE 0.496 0.503 0.356 0.452 QuantiDCE 0.554 0.554 0.395 0.501 EmpatheticDialogues BLEU -0.051 * 0.002 * 0.005 * -0.015 ROUGE 0.029 * -0.013 * -0.010 * 0.002 METEOR 0.118 0.055 * 0.04 * 0.071 BERTScore 0.046 * 0.033 * 0.021 * 0.033 ADEM 0.007 * 0.009 * 0.040 * 0.019 BERT-RUBER -0.022 * -0.040 * -0.029 * -0.030 BLEURT 0.203 0.192 0.13 0.175 GRADE 0.350 0.344 0.243 0.312 QuantiDCE 0.412 0.393 0.274 0.360 Table 1: Correlations between automatic evaluation metrics and human judgements on two datasets (Con-vAI2 and EmpatheticDialogues).",
"= 0.1.",
"For the KD fine-tuning, we further fine-tune the pretrained model for 20 epochs with batch size 10 and learning rate 5e-6.",
"For all the training, BERTAdam is used as the optimizer with 1 = 0 .",
"9 and 2 = 0 .",
"999 .",
"For the Transformer-layer distillation, we distill all the Transformer layers since the model architectures of the teacher and the student are exactly the same.",
"Metric Performance.",
"The correlation results of QuantiDCE and the other baseline metrics on the large-scale human judgement benchmark are presented in Table 1, including the ConvAI2 and the EmpatheticDialogues datasets.",
"6 For a fair comparison, the learnable baseline metrics, ADEM, BERT-RUBER and GRADE, are trained on the training dataset we adopted, i.e., DailyDialog++.",
"7 Generally, QuantiDCE achieves an absolute averaged correlation improvement by around 5% points over the current SOTA, GRADE.",
"Besides, all the results of QuantiDCE are statistically significant with p-value < 0.01.",
"evaluation since we used it for fine-tuning.",
"7 BLEURT was not trained on DailyDialog++ since this dataset is not suitable for the BLEURT pre-training strategy.",
"Instead, we trained BLEURT with the fine-tuning data we used.",
"The training details of these baseline metrics are provided in Appendix A. Loss Pearson Spearman Kendall Average ConvAI2 BCE 0.505 0.505 0.361 0.457 Ranking 0.507 0.504 0.360 0.457 SupCon 0.495 0.523 0.367 0.462 FAT 0.516 0.521 0.371 0.469 Vanilla MLR 0.522 0.536 0.379 0.479 MLR (ours) 0.554 0.554 0.395 0.501 EmpatheticDialogues BCE 0.354 0.353 0.243 0.317 Ranking 0.399 0.389 0.272 0.353 SupCon 0.332 0.315 0.22 0.289 FAT 0.381 0.358 0.245 0.328 Vanilla MLR 0.403 0.387 0.267 0.352 MLR (ours) 0.412 0.393 0.274 0.360 Table 2: Correlations between human judgements and the metric models trained with different losses during pre-training and the same KD-MSE loss during finetuning.",
"Pre-Training Objective.",
"To verify the superiority of our pre-training objective, namely the MLR loss, we investigated the performance of several existing loss functions for pre-training compared with ours.",
"Specifically, two categories of loss functions used for metric training are adopted, including",
"(a) the two-level setting and",
"(b) the multi-level setting.",
"The binary cross entropy (BCE) loss and the margin ranking loss are adopted for the two-level setting, while another three loss functions are adopted for the multi-level setting, including the supervised contrastive (SupCon) loss (Khosla et al., 2020), the fast-approximated triplet (FAT) loss (Yuan et al., 2019) and the vanilla MLR loss (Lin et al., 2020) 8 .",
"As shown in Table 2, the performance of our MLR loss is the best among all the pre-training objectives.",
"And we also found that the multi-level setting losses perform better than the two-level ones, especially on the ConvAI2 dataset.",
"Moreover, in order to more intuitively analyze the performances of these pre-training objectives, we also visualize the encoded features and the predicted scores of the model M after being pretrained by the above loss functions on the DailyDialog++ dataset without fine-tuning.",
"9 As shown in Figure 3,",
"(a) the BCE loss cannot separate the level-1 scores from the level-2 ones and the corresponding features are also mixed;",
"(b) the FAT loss, on the other hand, separates the features of different levels well, but does not consider the relative gaps where the distances between the level-1 and level-3 features are 8 The details of these pre-training loss fucntions are provided in Appendix B. 9 The visualization results of the ranking loss, SupCon loss and Vanilla MLR loss are provided in Appendix C.",
"not larger than those between level-1 and level-2;",
"(c) in contrast, our MLR loss separates both the features and the scores well and also considers the relative gaps between different levels.",
"Fine-Tuning Objective.",
"Furthermore, we also verified the effectiveness of our KD-MSE loss during fine-tuning by comparing with other fine-tuning losses, including the pure MSE loss without KD regularization as shown in Equation 6 and the same MSE loss except for freezing the encoder network and only finetuning the predictor network i.e. the MLP, denoted as MSE (fix encoder).",
"As the results shown in Table 3, compared with the other two losses, the model finetuned by our KD-MSE loss has the highest correlation results on both ConvAI2 and EmpatheticDialogues.",
"Moreover, by compar-0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Score 3 2 1 L e v e l Figure 4: Score visualization on the dailydialog++ dataset where the scores are predicted by our QuantiDCE after KD fine-tuning.",
"ing the results of MSE and KD-MSE, we can find that introducing KD regularization leads to obvious averaged correlation improvements by 20.2% points on ConvAI2 and 11.3% points on EmpatheticDialogues, which verifies the effectiveness of the KD loss.",
"Besides, we also reported the last-epoch correlation results on the training dataset, DailyDialogEVAL.",
"And the results of MSE and MSE (fix encoder) indicate the phenomena of overfitting and under-fitting into DailyDialogEVAL respectively, which explain the reasons of their low performance on the two evaluation datasets.",
"In contrast, our KD-MSE loss enables the model to learn the actual human rating standards from the scarce annotated data and avoid overfitting it si-Metric Pearson Spearman Kendall Average QuantiDCE 0.554 0.554 0.395 0.501 w/o MLR pre-training 0.373 0.357 0.246 0.325 w/o separation loss 0.388 0.416 0.289 0.364 w/o compactness loss 0.526 0.550 0.390 0.489 w/o ordering loss -0.494 -0.522 -0.371 -0.462 w/o KD fine-tuning 0.531 0.540 0.381 0.484 Table 4: Ablation studies on the ConvAI2 dataset by removing one of the component in QuantiDCE, including the MLR loss (w/o MLR pre-training), the KD+MSE loss (w/o KD fine-tuning), and three secondary losses of the MLR loss.",
"multaneously.",
"Finally, in Figure 4, we present the visualization of the scores predicted by our QuantiDCE after KD fine-tuning.",
"Compared with the score distributions before fine-tuning in Figure",
"3(c), the finetuned score distributions of the level-1 and level-3 are wider and partly overlap with the level-2 distribution.",
"It is predictable as the judgements of coherence are always subjective and humans tend to give vague and middle scores instead of extremely high or low scores.",
"Component Analysis.",
"To verify the contributions of the core components in our QuantiDCE, we further conducted ablation studies on the ConvAI2 dataset.",
"As shown in Table 4, both the MLR pretraining and KD fine-tuning contribute to the better performance of QuantiDCE.",
"Besides, we also conducted ablations by removing one of the secondary loss during MLR pre-training, including the separation loss, the compactness loss and the ordering loss.",
"The results show that the performance ben-efits from all these losses in which the separation loss and the ordering loss are crucial for training a metric with strong and positive human correlations.",
"Number of Data for Fine-Tuning.",
"Moreover, we also investigated how the scale of data for finetuning effects the model performance by increasing the number of fine-tuning data 5% each time from zero.",
"The trend of the model performance is presented in Figure 5.",
"We observed that minimizing our KD-MSE loss made the correlation results have a gradually increasing trend after an initial decrease.",
"10 More specifically, the result achieved the standard before fine-tuning at around the 70% data scale and continued increasing until 100% with a final improvement by around 2% points.",
"For comparison, the performance trends of MSE and MSE (fix encoder) are also provided.",
"And the results present overall decreasing trends of the model performance, which indicates that the model trained by MSE or MSE (fix encoder) cannot benefit from the increasing of data scale, due to the severe overfitting or under-fitting.",
"Therefore, to effectively utilize the limited data, it is important to enable the update of the entire network and add some constraints to avoid over-fitting, such as our proposed KD regularization.",
"To illustrate the performance of QuantiDCE, two representative examples are shown in Table 5 .",
"The first example shows the strength of QuantiDCE where the coherence score given by ours is closer to the human rating score compared with the extremely high score given by GRADE.",
"However, in the second example, both our QuantiDCE and GRADE deviate from the human score, possibly because the number of coherence levels we adopted in this work ( L = 3) is insufficient as humans usually consider more levels of dialogue coherence.",
"10 The initial decrease probably attributes to the randomness of data sampling where the smaller the sampling ratio is, the higher the probability that noisy samples dominate the sampled data will be.",
"And overfitting into the noisy samples leads to the performance decrease.",
"In this paper, we propose QuantiDCE, a novel training framework aiming to bridge the gap between the training objective and the actual human rating and train a quantifiable dialogue coherence metric.",
"In general, QuantiDCE includes two training stages, MLR pre-training for learning the coarse human judgements of dialogue coherence degrees, and KD fine-tuning for learning the actual human rating standards.",
"Experimental results show that the metric trained by QuantiDCE presents strong correlations with human judgements.",
"For future work, it is interesting to investigate a more efficient way to obtain multi-level data and extend the multilevel setting into the general evaluation for natural language generation.",
"We thank all anonymous reviewers for their constructive comments.",
"This work was supported in part by National Key R&D Program of China under Grant No. 2020AAA0109700, National Natural Science Foundation of China (NSFC) under Grant No.U19A2073 and No.61976233, Guangdong Province Basic and Applied Basic Research (Regional Joint Fund-Key) Grant No.2019B1515120039, Shenzhen Fundamental Research Program (Project No. RCYX20200714114642083, No. JCYJ20190807154211365), Zhijiang Lab's Open Fund (No. 2020AA3AB14) and CSIG Young Fellow Support Fund."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"We present V ector of L ocally A ggregated E mbeddings ( VLAE ) for effective and, ultimately, lossless representation of textual content.",
"Our model encodes each input text by effectively identifying and integrating the representations of its semantically-relevant parts.",
"The proposed model generates high quality representation of textual content and improves the classification performance of current state-of-the-art deep averaging networks across several text classification tasks.",
"Representation learning algorithms can reveal intrinsic low-dimensional structure in data (Rumel-hart et al., 1986; Bengio et al., 2013; LeCun et al., 2015).",
"In particular, deep averaging networks (DANs) are effective for text classification (Shen et al., 2018; Arora et al., 2017; Wieting et al., 2016; Iyyer et al., 2015).",
"They achieve their improvement through use of word embeddings, weighted averaging, and deepening networks.",
"The above works show that DANs can outperform RNNs and CNNs in text classification while taking only a fraction of their training time.",
"In this work, with a special focus on DANs, we study the effect of information loss associated with average word embeddings and develop algorithms that are robust against information loss for text representation.",
"We show that divergence of word embeddings from their average can be considered as a good proxy to quantify information loss; in particular, longer documents suffer from significant information loss when represented by average word embeddings.",
"These results inspire our work to develop a novel representation learning approach based on Vector of Locally Aggregated Descriptors (VLAD) (Jegou et al., 2010; Arandjelovic and Zisserman, 2013)an effective approach to integrate image descriptors for large scale image datasets.",
"Our model identifies semantically-relevant parts of documents and locally integrates their representations through clustering and autoencoding.",
"In contrast to averaging, our model prevents larger semantically-relevant parts of inputs to dominate final representations.",
"It improves DANs by 5 .",
"30 macro-F1 points in classifying longer texts and show comparable performance to them on shorter text.",
"How can information loss be quantified when word embeddings are averaged?",
"How important it is to address information loss when representing textual content?",
"Are representation learning algorithms robust against information loss?",
"We conduct experiments to answer these questions with respect to deep averaging network (DANs).",
"Our study can inspire works in more complex averaging approaches such as those reported in (Torabi Asr et al., 2018; Kiela et al., 2015) as well as recent works on unsupervised semantic similarity (Pagliardini et al., 2018).",
"We use the DAN developed in (Joulin et al., 2017) and several datasets containing short and long documents to answer these questions.",
"Let's assume a d -dimensional word embedding space.",
"We quantify the amount of information loss in the average word embedding vector of a given document S R n d by computing the average divergence (or distance) between its word embeddings, w i R d i { 1 . . . n } , and their average vector, s = 1 /n (cid:80) i w i , s R d , as follows: divergence = 1 n (cid:88) i (1 cosine ( s , w i )) .",
"Figures",
"1(a)1(c) show strong positive correlation between divergence and document length IMDb Reddit Twitter 0.54 0.56 0.58 0.60 0.62 0.64 0.66 0.68 0.70 divergence from average embedding 0 50 100 150 200 250 a v e r a g e d o c u m e n t l e n g t h document_length trend_line",
"across long and short text datasets.",
"Given these results and if we assume longer documents should suffer from greater information loss if represented by average word embeddings, divergence from mean can be a good proxy to quantify information loss associated with average embeddings.",
"As Figures",
"1(d) and",
"1(e) show, DAN's macro-F1 classification performance considerably decreases as divergence (or text length) increases for IMDb and Reddit datasets; note that we sort and bin instances based on their divergence values and report average macro-F1 for each bin.",
"In particular, as the trend lines in Figures",
"1(d) and",
"1(e) show, the average macro-F1 performance drops from 0 .",
"86 and 0 .",
"82 on shorter IMDb and Reddit posts to 0 .",
"82 and 0 .",
"71 on their longer posts respectively.",
"In addition, the result on Twitter dataset, Figure",
"1(f), shows that DANs are robust against small information loss, i.e. small divergence values below 0 .",
"55 do not inversely affect macro-F1 performance.",
"This result is also observed on the other two datasets, see macro-F1 performance for small divergence values ( 0 . 55 ) in Figures",
"1(d) and",
"1(e).",
"The above experiments show that",
"(a): significant information loss can occur when word embeddings are averaged, in particular, when representing longer documents, and",
"(b) such information loss can inversely affect the performance of downstream classifiers like DANs on longer texts.",
"In this paper, we develop an effective representation learning model to tackle this problem.",
"We propose to utilize semantically-relevant parts of inputs to tackle information loss associated with average word embeddings.",
"Assuming that semantically-relevant words are closer to each other in semantic space (constructed over a global vocabulary), we expect divergence between words in semantically-relevant parts of inputs (i.e. information loss associated to their average word embedding) to be very small.",
"Thus, as Figure 2 illustrates, we propose to cluster the semantic space to first identify semantically-relevant parts of inputs over a global vocabulary; we then effectively integrate these parts to represent documents.",
"Let's assume a global vocabulary V in which words are represented in a d -dimensional space, w R d .",
"As Figure 2 shows, we first cluster this semantic space into k clusters through the following objective function over V : min (cid:88) w V || f ( C , w ) w || 2 , (2) where C is the set of k cluster centers, |C| = k , and f ( C , w ) returns the nearest cluster center c C to the embedding vector w based on cosine similarity among embeddings or Euclidean distance in case of K-Means.",
"1 Given a document S R n d with an arbitrary number of n 1 words, and the above k cluster centers, we compute the representation of the document in each cluster c i , i = 1 . . . k as follows: a i = 1 z i (cid:88) j : f ( C , w j )= c i w j (3) z i = | j : f ( C , w j ) = c i | , where a i R d indicates the representation of the document at cluster c i and is obtained by taking the average embedding of words of the document that have been assigned to cluster c i according to Equation (2), and z i is the number of such words in cluster c i .",
"To this end, each document can be represented by A R d k which is obtained by concatenating its cluster-level representations.",
"Note that we didn't observe any performance difference between the above averaging process versus computing residuals (differ-ences between word embeddings and corresponding cluster centroids) which is commonly used to represent cluster-level image descriptors (Jegou et al., 2010; Arandjelovic and Zisserman, 2013) in image processing.",
"Since A s are of fixed length, they can be readily used as features in traditional classification and clustering algorithms.",
"However, they can cause ef-ficiency issues because of their large size ( d k ); note that the typical value for embedding dimension d is 300 (Pennington et al., 2014; Mikolov et al., 2013).",
"To tackle this issue, we further integrate cluster-level representations, at the cost of some further information loss, to create representations of lower dimension for inputs.",
"In particular, given all input documents with k cluster-level representations A R d k for each document, we develop an autoencoder with one 1 This problem can be solved through gradient descent seeded with an initial set of k examples drawn uniformly at random from V (Bottou and Bengio, 1995; Sculley, 2010).",
"hidden layer that integrates these cluster-level representations to create a final representation for each document, vector a R d m where m is the dimensionality reduction parameter and m d is length of the representation (final layer of the encoder) and is smaller than d k for m < k .",
"Training a single-layer autoencoder corresponds to optimizing the learning parameters to minimize the overall loss between inputs and their reconstructions.",
"For real-valued A , squared loss is often used (Vincent et al., 2010), i.e. l ( A ) = || A A || 2 where A R d k is reconstruction of A and generated by the decoder from a .",
"Our intuition is that if a leads to a good reconstruction of A , it has retained all information available in the input.",
"We refer to a R d m as the Vector of Locally Aggregated Embeddings (VLAE).",
"We expect this final representation to be robust against information loss due to its cluster-level local aggregation which prevents larger portions of semantically-similar words to dominate the representation.",
"Data: We investigate VLAEs in three binary classification tasks: sentiment classification on IMDb (Maas et al., 2011), disease-text classification on Reddit, where the task is to classify reddit posts as relevant or irrelevant to specific diseases, and churn prediction on Twitter (Amiri and Daume III, 2015), where the task is to clas-sify/predict if given tweets indicate user intention about leaving brands, e.g. the tweet my days with BRAND are numbered is a churny tweet.",
"See details in Table 1.",
"For pre-processing, we change all texts to lowercase, and remove stop words, user names, and URLs from texts.",
"Settings: We use validation data for hyperpa-rameter tuning and model selection.",
"We use 300 dimensional word embeddings ( d = 300 ) provided by Google (Mikolov et al., 2013), and for greater number of d s, we train word2vec on unlabeled data, see Table 1.",
"In addition, we set the dimensionality reduction parameter m from { 1 . . . 4 } using validation data.",
"The best value of m is the same across tasks/datasets, m = 2 .",
"Furthermore, we determine the number of clusters k for VLAEs by choosing the optimal k from { 2 i , i = { 1 . . . 7 }} using validation data of each dataset.",
"We learn optimal k with respect to task, but not embedding space, due to significant density of the semantic space of word embeddings, see Note on Clustering Word Embeddings.",
"Baselines: We consider two versions of DANs as baselines: Avg small and Avg large which represent documents by average word embedding of size d = 300 and d = m 300 respectively.",
"Note that, for fair comparison, Avg large has the exact same size as our model ( VLAE ); however, depending on m , their network size is 1 .",
"3 1 .",
"6 times greater than that of Avg small due to difference in input dimensions.",
"We use 3 hidden layers of size 300 for above networks.",
"Also, to directly evaluate the effect of averaging, we do not adjust initial word embeddings during training.",
"Experimental Results: Table 2 shows the performance of different models across datasets.",
"The results show that VLAE significantly outperforms Avg small and Avg large by 2 .",
"6 and 7 .",
"2 points in Macro-F1 on IMDb.",
"The corresponding values on Reddit dataset are 6 .",
"7 and 3 .",
"4 points respectively.",
"We believe these improvements are due to more effective and lossless representation of inputs.",
"We note that Avg large performs worse than Avg small on IMDb.",
"This could be attributed to the size of training data which may not be enough to train Avg large , or to lower quality of input representations in Avg large compared to Avg small in case of IMDb.",
"Note that although VLAE has the same number of parameters as Avg large , it uses autoencoding to effectively filter redundant information.",
"Verify-Avg small Avg large VLAE IMDb 83.11 78.52 85.72* Reddit 59.42 62.72 66.10* Twitter 61.42 73.08* 72.62 AVG 67.98 71.44 74.81 Table 2: Macro-F1 performance of different models across datasets.",
"ing these hypotheses will be the subject of future work.",
"In addition, VLAE show lower performance than Avg large on Twitter dataset, F1 of 72 .",
"62 versus 73 .",
"08 .",
"We attribute this result to the shorter length of tweets for which, as we experimentally showed before, averaging does not cause major divergence in representations.",
"On average, VLAE improves Avg small and Avg large by 4 .",
"7 and 5 .",
"3 F1 points on IMDb and Reddit (longer texts) respectively.",
"It also shows comparable performance to best performing model on Twitter (shorter texts).",
"We also compare models in terms of the quality of their representations.",
"For this comparison, we ignore input preparation time and assume a model that generates better representations should converge faster than other models; note that the overall turnaround time of VLAE is greater than that of Avg small or Avg large because of its input preparation time which we ignore for the purpose of this experiment.",
"The result show that VLAE leads to 7 .",
"5 , 1 .",
"3 , and 1 .",
"3 times faster convergence than Avg small and 14 .",
"9 , 2 .",
"6 , and 1 .",
"8 times faster convergence than Avg large on IMDb, Reddit, and Twitter datasets respectively.",
"Considering the size of these networks, these results indicate that representations obtained from VLAE are much better than those of its counterparts.",
"Note on Clustering Word Embeddings: In experiments, we observe clusters obtained from word embeddings are often very dense.",
"This is a challenge for our model because with small number of clusters ( k s) potentially dissimilar words can appear in the same cluster, while with large k s semantically-similar words may appear in different clusters.",
"Neither of these are desired.",
"To illustrate the above challenge, we report Silhouette Coefficient (SC) (Rousseeuw, 1987) of k-means with different number of clusters obtained from words embeddings across datasets.",
"SC indicates how well cluster boundaries are detected 2 4 8 16 32 64 128 #cluster 0.000 0.025 0.050 0.075 0.100 0.125 0.150 S il h o u e tt e C o e ff i c i e n t IMDB Reddit Figure 3: Mean Silhouette Coefficient computed for different number of clusters; a higher Silhouette Coefficient score indicates better defined clusters.",
"by a clustering model.",
"It is calculated using the mean intra-cluster distance and the mean nearest-cluster distance for each sample.",
"Specifically, the mean distance between each embedding and all other embeddings in the same cluster ( mc ), and the mean distance between the embedding and all other embeddings in the next nearest cluster (the nearest cluster that the embedding is not part of) ( mn ) are used to measure SC for the embedding: mn mc max( mn, mc ) .",
"The best and worst SC scores are 1 and 1 which indicate ideal and worst clustering respectively.",
"Also, values near 0 indicate overlapping clusters.",
"Figure 3 shows the mean SCs computed over all word embeddings for IMDb and Reddit datasets.",
"2 The results show that",
"(a): the best number of clusters is k = 2 on both datasets, and",
"(b): Silhouette Coefficient scores generally home in on values close to zero as the number of clusters increases.",
"These results show significant density of embeddings in semantic space.",
"Therefore, we optimize the number of clusters for creating VLAE s by resorting to validation data and measuring task-specific performance.",
"From these results, we conclude that a hierarchical clustering approach that recursively combines pairs of semantically-similar clusters could help better defining these clusters and perhaps improve the performance of our model.",
"Deep averaging networks (DANs) (Joulin et al., 2017; Iyyer et al., 2015; Arora et al., 2017; Shen",
"et al., 2018) were developed based on the successes of vector operations in embedding space.",
"In contrast to their simplicity, DANs showed high performance in text classification tasks.",
"Arora et al. (2017) showed that sentences can be effectively represented by the weighted average of their word embeddings modified by PCA/SVD.",
"In addition, the DANs developed in (Iyyer et al., 2015), (Joulin et al., 2017), and (Shen et al., 2018) were feed-forward networks that used average word embeddings to represent inputs; they were effective for several NLP tasks such as document categorization, text pair similarity, and short sentence classification.",
"Furthermore, feed-forward architectures like DANs have been used for language modeling (Bengio et al., 2003) and greedy transition-based dependency parsing (Chen and Manning, 2014) with fast turnaround time.",
"In addition, previous research investigated a variety of vector operations that could replace the averaging operation used in the DANs.",
"Many of these operations have been studied in (Mitchell and Lapata, 2008) for modeling the composition-ality of short phrases, or showing the utility of simple vector computations(Banea et al., 2014).",
"The operations in (Mitchell and Lapata, 2008) were also extended to use syntactic relation between words and grammar (Erk and Pado, 2008; Collobert and Weston, 2008).",
"Also, clustering semantic space was studied in (Mekala et al., 2017) to learn context information for words and for tasks like topic coherence and information retrieval.",
"In this work, we built on previous work on DANs and investigated and tackled information loss associated with average word embeddings.",
"We investigate information loss associated with average word embeddings.",
"We show that averaging lead to significant information loss and propose to tackle the issue by identify semantically-similar parts of documents through clustering of semantic space at word-level and integrating cluster-level representations through autoencoding.",
"A promising future direction is to use hierarchical clustering to create better cluster-level representations.",
"We thank anonymous reviewers for their insightful comments and constructive feedback."
] | [
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"abstain",
"other"
] |
[
"In this paper, we propose a variational approach to weakly supervised document-level multi-aspect sentiment classification.",
"Instead of using user-generated ratings or annotations provided by domain experts, we use target-opinion word pairs as supervision.",
"These word pairs can be extracted by using dependency parsers and simple rules.",
"Our objective is to predict an opinion word given a target word while our ultimate goal is to learn a sentiment polarity classifier to predict the sentiment polarity of each aspect given a document.",
"By introducing a latent variable, i.e., the sentiment polarity, to the objective function, we can inject the sentiment polarity classifier to the objective via the variational lower bound.",
"We can learn a sentiment polarity classifier by optimizing the lower bound.",
"We show that our method can outperform weakly supervised baselines on TripAdvisor and BeerAdvocate datasets and can be comparable to the state-of-the-art supervised method with hundreds of labels per aspect.",
"Document-level multi-aspect sentiment classification (DMSC) aims to predict the sentiment polarity of each aspect given a document which consists of several sentences describing one or more aspects (Wang et al., 2010, 2011; Yin et al., 2017).",
"Solving the DMSC task is useful for providing both recommendations for users and suggestions for business owners on customer review platforms.",
"Aspect based sentiment classification (Tang et al., 2016a,b; Wang et al., 2016b; Chen et al., 2017; Ma et al., 2017; Wang et al., 2018) was usually done by supervised learning, where aspect-level annotations should be provided.",
"Aspect-level annotations are not easy to obtain.",
"Even when the platform provides the function to rate for different aspects, users are less likely to submit all of them.",
"For example, about 37% of the aspect ratings are missing on TripAdvisor.",
"If we can solve DMSC task without using aspect-level annotations, it can save human effort to annotate data or collect user-generated annotations on the platform.",
"Existing weakly supervised approaches (Wang et al., 2010, 2011) use overall polarities instead of aspect polarities as supervision.",
"Compared with the polarity of each aspect, it is relatively easy to obtain overall polarities.",
"Specifically, they minimize the square loss between the overall polarity and the weighted sum of all aspect polarities.",
"However, when users only care about a particular rare aspect, e.g., childcare services, these approaches cannot estimate parameters of the rare aspect incrementally.",
"They have to re-collect documents which mentioned this rare aspect and estimate parameters of all aspects based on the new corpus.",
"In addition, these approaches assume the document is a bag-of-words, which neglects the order of the words and fails to capture the similarity between words.",
"In this paper, we propose to use target-opinion word pairs as supervision.",
"Target-opinion word pairs can be helpful with our ultimate goal which is to learn a classifier to predict the sentiment polarity of each aspect given a document.",
"For example, in a document The bedroom is very spa-cious, if we can extract the target-opinion pair bedroom-spacious, the sentiment polarity of the aspect room is likely to be positive .",
"Hence, we propose to achieve the polarity classification goal by accomplishing another relevant objective: to predict an opinion word given a target word.",
"We can decompose the opinion word prediction objective into two sub-tasks.",
"The first sub-task is to predict the sentiment polarity based on a document.",
"For example, given a document The bedroom is very spacious, it predicts the sentiment polarity of the aspect room to be positive .",
"The second sub-task is to predict the opinion word given a target word and a sentiment polarity predicted by the first sub-task.",
"For example, knowing the fact that the sentiment polarity of the aspect room is positive , it predicts the opinion word associated with the target word room to be spacious.",
"By introducing a latent variable, i.e., the sentiment polarity of an aspect, to the opinion word prediction objective, we can inject the polarity classification goal (the first sub-task) into the objective via the variational lower bound which also incorporates the second sub-task.",
"In this sense, our training objective is only based on the target-opinion word pairs which can be extracted by using dependency parsers and some manually designed rules.",
"We consider our approach as weakly supervised learning because there is no direct supervision from polarity of each aspect.",
"In other words, our model includes two classifiers: a sentiment polarity classifier and an opinion word classifier.",
"In the sentiment polarity classifier, it predicts the sentiment polarity given a document.",
"In the opinion word classifier, it predicts an opinion word based on a target word and a sentiment polarity.",
"Compared with previous approaches (Wang et al., 2010, 2011), our approach can get rid of the assumption that the overall polarity should be observed and it is a weighted sum of all aspect polarities.",
"Moreover, our approach can estimate parameters of a new aspect incrementally.",
"In addition, our sentiment polarity classifier can be more flexible to capture dependencies among words beyond the bag-of-words representation if we use a deep neural network architecture to extract features to represent a document.",
"We conducted experiments on two datasets, TripAdvisor (Wang et al., 2010) and BeerAdvocate (McAuley et al., 2012), to illustrate the effectiveness of our approach.",
"Our contributions are summarized as follows, We propose to solve DMSC task in a nearly unsupervised way.",
"We propose to learn a classifier by injecting it into another relevant objective via the variational lower bound.",
"This framework is flexible to incorporate different kinds of document representations and relevant objectives.",
"We show promising results on two real datasets and we can produce comparable results to the supervised method with hundreds of labels per aspect.",
"Code and data for this paper are available on https://github.com/HKUST-KnowComp/ VWS-DMSC.",
"In this section, we describe our variational approach to weakly supervised DMSC (VWS-DMSC).",
"In the next section, we present how we obtain target-opinion word pairs by using a rule-based extraction approach.",
"Our model consists of a sentiment polarity classifier and an opinion word classifier.",
"Our task is document-level multi-aspect sentiment classification.",
"For each aspect, we train a sentiment polarity classifier and an opinion word classifier.",
"The input of the sentiment polarity classifier of each aspect is the same, i.e., a representation of a document.",
"The target-opinion word pairs used in opinion word classifiers are different for different aspects.",
"Figure 1 shows the relation between two classifiers (on the aspect price ).",
"The input x of the sentiment polarity classifier is a representation of a document, e.g., bag-of-words or a representation learned by recurrent neural networks.",
"The sentiment polarity classifier takes x as input and produces a distribution of sentiment polarity R a of an aspect a , denoted as q ( R a | x ) .",
"If R a only has two possible values, i.e., positive and negative, then outputs of the classifier are q ( positive | x ) and q ( negative | x ) respectively.",
"The opinion word classifier takes a target word (price) and a possible value of the sentiment polarity r a as input, and estimates p ( good | r a , price ) .",
"Our training objective is to maximize the log-likelihood of an opinion word given a target word, e.g., p ( good | price ) .",
"The likelihood is estimated based on the sentiment polarity classifier and the opinion word classifier.",
"The sentiment polarity classifier aims to estimate a distribution of sentiment polarity q ( R a | x ) , where R a is a discrete random variable representing the sentiment polarity and x is a feature representation of a document.",
"We use a simple Softmax classifier here.",
"We denote r a as a possible value of the random variable R a , representing a possible sentiment polarity.",
"The model estimates the probability of class r a as q ( R a = r a | x ) = exp (cid:0) w Tr a x (cid:1) (cid:80) r (cid:48) a exp (cid:0) w Tr (cid:48) a x (cid:1) , (1) where w r a is a vector associated with sentiment class r a for aspect a .",
"Document Representation The representation of a document x can be different using different feature extraction approaches.",
"Traditional document representations of sentiment classification would be bag-of-words, n-gram, or averaged word embeddings.",
"Recently, end-to-end recurrent neural network based models demonstrate a powerful capacity to extract features of a document.",
"The state-of-the-art model in DMSC task is (Yin et al., 2017).",
"We use it as the document representation in our model.",
"The opinion word classifier aims to estimate the probability of an opinion word w o given a target word w t and a sentiment polarity r a :",
"where is a scoring function related to opinion word w o , target word w t , and sentiment polarity r a .",
"Here we use the dot product as the scoring function: ( w o , w t , r a ) = I (cid:0) ( w t , w o ) P , w t K a (cid:1) c Tr a w o , (3) where w o is the word embedding of opinion word w o , c r a is a vector associated with r a , P is the set of pairs extracted from the document, K a is the set of target words associated with aspect a , and I ( ) is an indicator function where I ( true ) = 1 and I ( false ) = 0 .",
"Given a target word w t and a sentiment polarity r a , we aim to maximize the probability of opinion words highly related to them.",
"For example, opinion word good is usually related to target word price for aspect value with sentiment polarity positive , and opinion word terrible is usually related to target word traffic for aspect location with sentiment polarity negative .",
"The objective function is to maximize the log-likelihood of an opinion word w o given a target word w t .",
"As we mentioned before, the objective function can be decomposed into two sub-tasks.",
"The first one corresponds to the sentiment polarity classifier.",
"The second one corresponds to the opinion word classifier.",
"After introducing a latent variable, i.e., the sentiment polarity, to the objective function, we can derive a variational lower bound of the log-likelihood which can incorporate two classifiers: L = log p ( w o | w t ) = log (cid:88) r a p ( w o , r a | w t ) = log (cid:88) r a q ( r a | x ) (cid:104) p ( w o , r a | w t ) q ( r a | x ) (cid:105) (cid:88) r a q ( r a | x ) (cid:104) log p ( w o , r a | w t ) q ( r a | x ) (cid:105) = E q ( R a | x ) (cid:2) log p ( w o | r a , w t ) p ( r a | w t ) (cid:3) + H ( q ( R a | x )) = E q ( R a | x ) (cid:2) log p ( w o | r a , w t ) p ( r a ) (cid:3) + H ( q ( R a | x )) , (4) where H ( ) refers to the Shannon entropy.",
"By applying Jensen's inequality, the log-likelihood is lower-bounded by Eq.",
"(4).",
"The equality holds if and only if the KL-divergence of two distributions, q ( R a | x ) and p ( R a | w t , w o ) , equals to zero.",
"Maximizing the variational lower bound is equivalent to minimizing the KL-divergence.",
"Hence, we can learn a sentiment polarity classifier which can produce a similar distribution to the true posterior p ( R a | w t , w o ) .",
"Compared with p ( R a | w t , w o ) , q ( R a | x ) is more flexible since it can take any kind of feature representations as input.",
"We assume that a target word w t and a sentiment polarity r a are in-dependent since the polarity assignment is not in-fluenced by the target word.",
"We also assume that the sentiment polarity R a follows a uniform distribution, which means p ( r a ) is a constant.",
"We remove it in Eq.",
"(4) to get a new objective function as follows: E q ( R a | x ) [log p ( w o | r a , w t )] + H ( q ( R a | x )) .",
"The partition function of Eq.",
"(2) requires the summation over all opinion words in the vocabulary.",
"However, the size of opinion word vocabulary is large, so we use the negative sampling technique (Mikolov et al., 2013) to approximate Eq.",
"(2).",
"Specifically, we substitute log p ( w o | r a , w t ) in the objective (5) with the following objective function: log (cid:0) ( w o , w t , r a ) (cid:1) + (cid:88) w (cid:48) o N log (cid:0) ( w (cid:48) o , w t , r a ) (cid:1) , (6) where w (cid:48) o is a negative sample of opinion words in the vocabulary, N is the set of negative samples and is the sigmoid function.",
"Then our final objective function is rewritten as: E q ( R a | x ) (cid:2) log (cid:0) ( w o , w t , r a ) (cid:1) + (cid:88) w (cid:48) o N log (cid:0) ( w (cid:48) o , w t , r a ) (cid:1)(cid:3) + H ( q ( R a | x )) , (7) where is a hyper-parameter which can adjust the expectation and entropy terms into the same scale (Marcheggiani and Titov, 2016).",
"Target-opinion word pairs extraction is a well studied problem (Hu and Liu, 2004; Popescu and Etzioni, 2005; Bloom et al., 2007; Qiu et al., 2011).",
"We designed five rules to extract potential target-opinion word pairs.",
"Our method relies on Stanford Dependency Parser (Chen and Manning, 2014).",
"We describe our rules as follows.",
"Rule 1 : We extract pairs satisfying the grammatical relation amod (adjectival modi-fier) (De Marneffe and Manning, 2008).",
"For example, in phrase very good price, we extract price and good as a target-opinion pair.",
"Rule 2 : We extract pairs satisfying the grammatical relation nsubj (nominal subject), and the Dataset TripAdvisor BeerAdvocate # docs 28,543 27,583 # target words 3,737 3,088 # opinion words 12,406 9,166 # pairs from R1 208,676 249,264 # pairs from R2 82,944 28,505 # pairs from R3 2,241 1,092 # pairs from R4 2,699 6,812 # pairs from R5 16,537 55,825 Table 1: Statistics of extracted target-opinion pairs .",
"head word is an adjective and the tail word is a noun.",
"For example, in a sentence The room is small, we can extract room and small as a target-opinion pair.",
"Rule 3 : Some verbs are also opinion words and they are informative.",
"We extract pairs satisfying the grammatical relation dobj (direct object) when the head word is one of the following four words: like, dislike, love, and hate.",
"For example, in the sentence I like the smell, we can extract smell and like as a target-opinion pair.",
"Rule 4 : We extract pairs satisfying the grammatical relation xcomp (open clausal comple-ment), and the head word is one of the following word: seem,look, feel, smell, and taste.",
"For example, in the sentence This beer tastes spicy, we can extract taste and spicy as a target-opinion pair.",
"Rule 5 : If the sentence contains some adjectives that can implicitly indicate aspects, we manually assign them to the corresponding aspects.",
"According to (Lakkaraju et al., 2014), some adjectives serve both as target words and opinion words.",
"For example, in the sentence very tasty, and drinkable, the previous rules fail to extract any pair.",
"But we know it contains a target-opinion pair, i.e., taste-tasty.",
"Most of these adjectives have the same root form with the aspects they indicated, e.g., clean (cleanliness), and overpriced (price).",
"This kind of adjective can be extracted first and then we can obtain more similar adjectives using word similarities.",
"For example, given tasty, we could get flavorful by retrieving similar words.",
"Table 1 shows the statistics of the rule-based extraction on our two datasets.",
"The first four rules can be applied to any dataset while the last one is domain dependent which requires human effort to identify these special adjectives.",
"In practice, rule 5 can be removed to save human effort.",
"The effect of removing rule 5 is shown in experiments.",
"After extracting potential target-opinion word pairs, we need to assign them to different aspects as supervision signals.",
"We select some seed words to describe each aspect, and then calculate similarities between the extracted target (or opinion) word and seed words, and assign the pair to the aspect where one of its seed words has the highest similarity.",
"The similarity we used is the cosine similarity between two word embeddings trained by word2vec (Mikolov et al., 2013).",
"For example, suppose seed words { room, bed } and { business, Internet } are used to describe the aspect room and business respectively, and the candidate pair pillow soft will be assigned to the aspect room if the similarity between pillow and bed is highest among all combinations.",
"We evaluate our model on TripAdvisor (Wang et al., 2010) and BeerAdvocate (McAuley et al., 2012; Lei et al., 2016; Yin et al., 2017) datasets, which contain seven aspects (value, room, location, cleanliness, check in/front desk, service, and business) and four aspects (feel, look, smell, and taste) respectively.",
"We run the same preprocessing steps as (Yin et al., 2017).",
"Both datasets are split into train/development/test sets with proportions 8:1:1.",
"All methods can use development set to tune their hyper-parameters.",
"Ratings of TripAdvisor and BeerAdvocate datasets are on scales of 1 to 5 and 0 to 5 respectively.",
"But in BeerAdvocate, 0 star is rare, so we treat the scale as 1 to 5 .",
"We convert original scales to binary scales as follows: 1 and 2 stars are treated as negative, 3 is ignored, and 4 and 5 stars are treated as positive.",
"In BeerAdvocate, most reviews have positive polarities, so to avoid the unbalanced issue, we perform data selection according to overall polarities.",
"After data selection, the number of reviews with negative overall polarities and that with positive overall polarities are equal.",
"polarities in training sets as predictions.",
"Lexicon means using an opinion lexicon to assign sentiment polarity to an aspect (Read and Carroll, 2009; Pablos et al., 2015).",
"We combine two popular opinion lexicons used by Hu and Liu (2004) and Wilson et al. (2005) to get a new one.",
"If an opinion word from extracted pairs is in positive (negative) lexicon, it votes for positive (negative).",
"When the opinion word is with a negation word, its polarity will be flipped.",
"Then, the polarity of an aspect is determined by using majority voting among all opinion words associated with the aspect.",
"When the number of positive and negative words is equal, we adopt two different ways to resolve it.",
"For Lexicon-R , it randomly assigns a polarity.",
"For Lexicon-O , it uses the overall polarity as the prediction.",
"Since overall polarities can also be missing, for both Lexicon-R and Lexicon-O, we randomly assign a polarity in uncertain cases and report both mean and std based on five trials of random assignments.",
"Assign-O means directly using the overall polarity of a review in the development/test sets as the prediction for each aspect.",
"LRR assumes the overall polarity is a weighted sum of the polarity of each aspect (Wang et al., 2010).",
"LRR can be regarded as the only existing weakly supervised baseline where both algorithm and source code are available.",
"BoW-DMSC-A is a simple softmax classifier using all annotated training data where the input is a bag-of-words feature vector of a document.",
"N-DMSC-A is the state-of-the-art neural network based model (Yin et al., 2017) ( N-DMSC ) in DMSC task using all annotated training data, which serves an upper bound to our method.",
"N-DMSC-O is to use overall polarities as supervision to train an N-DMSC and apply it to the classification task of each aspect at the test time.",
"N-DMSC{ 50,100,200,500,1000 } is the N-DMSC algorithm using partial data.",
"In order to see our method is comparable to supervised methods using how many labeled data, we use { 50 , 100 , 200 , 500 , 1000 } annotations of each aspect to train N-DMSC and compare them to our method.",
"In addition to annotated data for training, there are extra 20% annotated data for validation.",
"Since the sampled labeled data may vary for different trials, we perform five trials of random sampling and report both mean and std of the results.",
"For our method, denoted as VWS-DMSC , the document representation we used is obtained from N-DMSC (Yin et al., 2017).",
"They proposed a novel hierarchical iterative attention model in which documents and pseudo aspect related questions are interleaved at both word and sentence-level to learn an aspect-aware document representation.",
"The pseudo aspect related questions are represented by aspect related keywords.",
"In order to benefit from their aspect-aware representation scheme, we train an N-DMSC to extract the document representation using only overall polarities.",
"In the iterative attention module, we use the pseudo aspect related keywords of all aspects released by Yin et al. (2017).",
"One can also use document-to-document autoencoders (Li et al., 2015) to generate the document representation.",
"In this way, our method can get rid of using overall polarities to generate the document representation.",
"Hence, unlike LRR, it is not necessary for our method to use overall polarities.",
"Here, to have a fair comparison with LRR, we use the overall polarities to generate document representation.",
"For our method, we do not know which state is positive and which one is negative at training time, so the Hungarian algorithm (Kuhn, 1955) is used to resolve the assignment problem at the test time.",
"We show all results in Table 2, which consists of three blocks, namely, unsupervised, weakly supervised, and supervised methods.",
"For unsupervised methods, our method can outperform majority on both datasets consistently.",
"But other weakly supervised methods cannot outperform majority on BeerAdvocate dataset, which shows these baselines cannot handle unbalanced data well since BeerAdvocate is more unbalanced than TripAdvisor.",
"Our method outperforms Lexicon-R and Lexicon-O, which shows that predicting an opinion word based on a target word may be a better way to use target-opinion pairs, compared with performing a lexicon lookup using opinion words from extract pairs.",
"Good performance of Lexicon-O and Assign-O demonstrates the usefulness of overall polarities in develop-ment/test sets.",
"N-DMSC-O trained with the overall polarities cannot outperform Assign-O since N-DMSC-O can only see overall polarities in training set while Assign-O can see overall polarities for both development and test sets and does not involve learning and generalization.",
"For weakly supervised methods, LRR is the only open-source baseline in the literature on weakly supervised DMSC, and our method outperforms LRR by 6 % and 16 % on TripAdvisor and BeerAdvocate datasets.",
"N-DMSC-O can also be considered as a weakly supervised method be-Dataset TripAdvisor BeerAdvocate Rule DEV TEST DEV TEST R1 0.7215 0.7174 0.7220 0.7216 R2 0.7172 0.7180 0.6864 0.6936 R3 0.6263 0.6187 0.6731 0.6725 R4 0.6248 0.6279 0.6724 0.6717 R5 0.5902 0.5856 0.7095 0.7066 R1 0.7538 0.7481 0.7458 0.7474 R2 0.7342 0.7368 0.7504 0.7529 R3 0.7418 0.7397 0.7565 0.7558 R4 0.7424 0.7368 0.7518 0.7507 R5 0.7448 0.7440 0.7550 0.7548 All 0.7577 0.7561 0.7502 0.7538 Table 3: Averaged accuracies on DMSC.",
"cause it only uses overall polarities as supervi-sion, and we still outperform it significantly.",
"It is interesting that LRR is worse than N-DMSC-O.",
"We guess that assuming that the overall polarity is a weighted sum of all aspect polarities may not be a good strategy to train each aspect's polarity or the document representation learned by N-DMSC is better than the bag-of-words representation.",
"For supervised block methods, BoW-DMSC-A and N-DMSC-A are both supervised methods using all annotated data, which can be seen as the upper bound of our algorithm.",
"N-DMSC-A outperforms BoW-DMSC-A, which shows that the document representation based on neural network is better than the bag-of-words representation.",
"Hence, we use the neural networks based document representation as input of the sentiment polarity classifier.",
"Our results are comparable to N-DMSC-200 on TripAdvisor and N-DMSC-100 on BeerAdvocate.",
"To evaluate effects of extracted rules, we performed an ablation study.",
"We run our algorithm VWS-DMS with each rule kept or removed over two datasets.",
"If no pairs extracted for one aspect in training set, the accuracy of this aspect will be 0.5, which is a random guess.",
"From the Table 3 we can see that, the rule R1 is the most effective rule for both datasets.",
"Rules R3/R4/R5 are less effective on their own.",
"However, as a whole, they can still improve the overall performance.",
"When considering removing each of rules, we found that our algorithm is quite robust, which indicates miss-Figure 2: Parameter sensitivity analysis.",
"ing one of the rules may not hurt the performance much.",
"Hence, if human labor is a major concern, rule 5 can be discarded.",
"We found that sometimes removing one rule may even result in better accuracy (e.g., -R3 for BeerAdvocate dataset).",
"This means this rule may introduce some noises into the objective function.",
"However, -R3 can result in worse accuracy for TripAdvisor, which means it is still complementary to the other rules for this dataset.",
"We also conduct parameter sensitivity analysis of our approach.",
"The parameter in Equation (7) adjusts the expectation and entropy terms on the same scale.",
"We test = { 0 , 0 .",
"01 , 0 .",
"1 , 1 } for both of the datasets.",
"As we can see from Figure 2, = 0 .",
"1 is a good choice for both datasets.",
"We implemented our models using TensorFlow (Abadi et al., 2016).",
"For N-DMSC and LRR, we used code released by Yin et al. (2017) and Wang et al. (2010) respectively and followed their preprocessing steps and optimal settings.",
"Parameters are updated by using ADADELTA (Zeiler, 2012), an adaptive learning rate method.",
"To avoid overfitting, we impose weight decay and drop out on both classifiers.",
"The regularization coefficient and drop out rate are set to 10 3 and 0 .",
"3 respectively.",
"The number of negative samples and in our model are set to 10 and 0 .",
"1 respectively.",
"For each document and each aspect, multiple target-opinion pairs are extracted.",
"The opinion word classifier associated with an aspect will predict five target-opinion pairs at a time.",
"These five target-opinion pairs are selected with bias.",
"The probability of a pair being selected is proportional to the frequency of the opinion word to the power of 0 .",
"25 .",
"In this way, opinion words with low frequency are more likely to be selected compared to the uniform sampling.",
"In order to initialize both classifiers better, the word embeddings are retrofitted (Faruqui et al., 2015) using PPDB (Gan-itkevitch et al., 2013) semantic lexicons.",
"In this section, we review the related work on document-level multi-aspect sentiment classification, target-opinion word pairs extraction, and variational methods.",
"Document-level Multi-Aspect Sentiment Classification.",
"Wang et al. (2010) proposed a LRR model to solve this problem.",
"LRR assumes the overall polarity is a weighted sum of all aspect polarities which are represented by word frequency features.",
"LRR needs to use aspect keywords to perform sentence segmentation to generate the representation of each aspect.",
"To address the limitation of using aspect keywords, LARAM (Wang et al., 2011) assumes that the text content describing a particular aspect is generated by sampling words from a topic model corresponding to the latent aspect.",
"Both LRR and LARAM can only access to overall polarities in the training data, but not gold standards of aspect polarities.",
"Meng et al. (2018) proposed a weakly supervised text classification method which can take label surface names, class-related keywords, or a few labeled documents as supervision.",
"Ramesh et al. (2015) developed a weakly supervised joint model to identify aspects and the corresponding sentiment polarities in online courses.",
"They treat aspect (sentiment) related seed words as weak supervision.",
"In the DMSC task which is a fine-grained text classification task, the label surface names or keywords for some aspects would be very similar.",
"Given that the inputs are the same and the supervisions are similar, weakly supervised models cannot distinguish them.",
"So we do not consider them as our baselines.",
"Yin et al. (2017) modeled this problem as a machine comprehension problem under a multi-task learning framework.",
"It also needs aspect keywords to generate aspect-aware document representations.",
"Moreover, it can access gold standards of aspect polarities and achieved state-of-the-art performance on this task.",
"Hence, it can serve as an upper bound.",
"Some sentence-level aspect based sentiment classification methods (Wang et al., 2016b, 2018) can be directly applied to the DMSC task, because they can solve aspect category sentiment classification task.",
"For example, given a sentence the restaurant is ex-pensive, the aspect category sentiment classification task aims to classify the polarity of the aspect category price to be negative .",
"The aspect categories are predefined which are the same as the DMSC task.",
"Some of them (Tang et al., 2016a,b; Chen et al., 2017; Ma et al., 2017) cannot because they are originally designed for aspect term sentiment classification task.",
"For example, given a sentence I loved their fajitas, the aspect term sentiment classification task aims to classify the polarity of the aspect term fajitas to be positive .",
"The aspect terms appearing in the sentence should be provided as inputs.",
"Target Opinion Word Pairs Extraction.",
"There are two kinds of methods, namely, rule based methods and learning based methods to solve this task.",
"Rule based methods extract target-opinion word pairs by mining the dependency tree paths between target words and opinion words.",
"Learning based methods treat this task as a sequence labeling problem, mapping each word to one of the following categories: target, opinion, and other.",
"(Hu and Liu, 2004) is one of earliest rule based methods to extract target-opinion pairs.",
"An opinion word is restricted to be an adjective.",
"Target words are extracted first, and then an opinion word is linked to its nearest target word to form a pair.",
"Popescu and Etzioni (2005) and Bloom et al. (2007) manually designed dependency tree path templates to extract target-opinion pairs.",
"If the path between a target word candidate and an opinion word candidate belongs to the set of path templates, the pair will be extracted.",
"Qiu et al. (2011) identified dependency paths that link opinion words and targets via a bootstrapping process.",
"This method only needs an initial opinion lexicon to start the bootstrapping process.",
"Zhuang et al. (2006) adopted a supervised learning algorithm to learn valid dependency tree path templates, but it requires target-opinion pairs annotations.",
"Learning based methods require lots of target-opinion pairs annotations.",
"They trained conditional random fields (CRF) (Lafferty et al., 2001) based models (Jakob and Gurevych, 2010; Yang and Cardie, 2012; Wang et al., 2016a) or deep neural networks (Liu et al., 2015; Wang et al., 2017; Li and Lam, 2017) to predict the label (target, opinion or other) of each word.",
"Jakob and Gurevych (2010) and Li et al. (2012) extracted target-opinion pairs without using using any labeled data in the domain of interest, but it needs lots of labeled data in another related domain.",
"In this paper, we only use very simple rules to extract target-opinion pairs to validate the effectiveness of our approach.",
"If better pairs can be extracted, we can further improve our results.",
"Variational Methods.",
"Variational autoencoders (Kingma and Welling, 2014; Rezende et al., 2014) (VAEs) use a neural network to parameterize a probability distribution.",
"VAEs consists of an encoder which parameterizes posterior probabilities and a decoder which parameterizes the reconstruction likelihood given a latent variable.",
"VAEs inspire many interesting works (Titov and Khoddam, 2015; Marcheggiani and Titov, 2016; Suster et al., 2016; Zhang et al., 2018; Chen et al., 2018) which are slightly different from VAEs.",
"Their encoders produce a discrete distribution while the encoder in VAEs yields a continuous latent variable.",
"Titov and Khoddam (2015) aimed to solve semantic role labeling problem.",
"The encoder is essentially a semantic role labeling model which predicts roles given a rich set of syntactic and lexical features.",
"The decoder reconstructs argument fillers given predicted roles.",
"Marcheggiani and Titov (2016) aimed to solve unsupervised open domain relation discovery.",
"The encoder is a feature-rich relation extractor, which predicts a semantic relation between two entities.",
"The decoder reconstructs entities relying on the predicted relation.",
"Suster et al. (2016) tried to learn multi-sense word embeddings.",
"The encoder uses bilingual context to choose a sense for a given word.",
"The decoder predicts context words based on the chosen sense and the given word.",
"Zhang et al. (2018) aimed to solve knowledge graph powered question answering.",
"Three neural networks are used to parameterize probabilities of a topic entity given a query and an answer, an answer based on a query and a predicted topic, and the topic given the query.",
"Chen et al. (2018) aimed to infer missing links in a knowledge graph.",
"Three neural networks are used to parameterize probabilities of a latent path given two entities and a relation, a relation based on two entities and the chosen latent path, and the relation given the latent path.",
"In this paper, we propose a variational approach to weakly supervised DMSC.",
"We extract many target-opinion word pairs from dependency parsers using simple rules.",
"These pairs can be supervision signals to predict sentiment polarity.",
"Our objective function is to predict an opinion word given a target word.",
"After introducing the sentiment polarity as a latent variable, we can learn a sentiment polarity classifier by optimizing the variational lower bound.",
"We show that we can outperform weakly supervised baselines by a large margin and achieve comparable results to the supervised method with hundreds of labels per aspect, which can reduce a lot of labor work in practice.",
"In the future, we plan to explore better target-opinion word extraction approaches to find better supervision signals.",
"This paper was supported by the Early Career Scheme (ECS, No. 26206717) from Research Grants Council in Hong Kong.",
"Ziqian Zeng has been supported by the Hong Kong Ph.D.",
"Fellowship.",
"We thank Intel Corporation for supporting our deep learning related research.",
"We also thank the anonymous reviewers for their valuable comments and suggestions that help improve the quality of this manuscript."
] | [
"objective",
"method",
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"method",
"objective",
"objective",
"abstain",
"result",
"other",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"method",
"result",
"objective",
"other",
"other",
"other",
"other",
"other"
] |
[
"In this work, we present a hybrid learning method for training task-oriented dialogue systems through online user interactions.",
"Popular methods for learning task-oriented dialogues include applying reinforcement learning with user feedback on supervised pretraining models.",
"Efficiency of such learning method may suffer from the mismatch of dialogue state distribution between offline training and online interactive learning stages.",
"To address this challenge, we propose a hybrid imitation and reinforcement learning method, with which a dialogue agent can effectively learn from its interaction with users by learning from human teaching and feedback.",
"We design a neural network based task-oriented dialogue agent that can be optimized end-to-end with the proposed learning method.",
"Experimental results show that our end-to-end dialogue agent can learn effectively from the mistake it makes via imitation learning from user teaching.",
"Applying reinforcement learning with user feedback after the imitation learning stage further improves the agent's capability in successfully completing a task.",
"Task-oriented dialogue systems assist users to complete tasks in specific domains by understanding user's request and aggregate useful information from external resources within several dialogue turns.",
"Conventional task-oriented dialogue systems have a complex pipeline (Rudnicky et al., 1999; Raux et al., 2005; Young et al., 2013) consisting of independently developed and modularly connected components for natural language understanding (NLU) (Mesnil et al., 2015; Liu and Lane, 2016; Hakkani-Tur et al., 2016), dialogue state tracking (DST) (Henderson et al., 2014c; Work done while the author was an intern at Google. Work done while at Google Research. Mrksic et al., 2016), and dialogue policy learning (Gasic and Young, 2014; Shah et al., 2016; Su et al., 2016, 2017).",
"These system components are usually trained independently, and their optimization targets may not fully align with the overall system evaluation criteria (e.g. task success rate and user satisfaction).",
"Moreover, errors made in the upper stream modules of the pipeline propagate to downstream components and get amplified, making it hard to track the source of errors.",
"To address these limitations with the conventional task-oriented dialogue systems, recent efforts have been made in designing end-to-end learning solutions with neural network based methods.",
"Both supervised learning (SL) based (Wen et al., 2017; Bordes and Weston, 2017; Liu and Lane, 2017a) and deep reinforcement learning (RL) based systems (Zhao and Es-kenazi, 2016; Li et al., 2017; Peng et al., 2017) have been studied in the literature.",
"Comparing to chit-chat dialogue models that are usually trained offline using single-turn context-response pairs, task-oriented dialogue model involves reasoning and planning over multiple dialogue turns.",
"This makes it especially important for a system to be able to learn from users in an interactive manner.",
"Comparing to SL models, systems trained with RL by receiving feedback during users interactions showed improved model robustness against diverse dialogue scenarios (Williams and Zweig, 2016; Liu and Lane, 2017b).",
"A critical step in learning RL based task-oriented dialogue models is dialogue policy learning.",
"Training dialogue policy online from scratch typically requires a large number of interactive learning sessions before an agent can reach a satisfactory performance level.",
"Recent works (Hender-son et al., 2008; Williams et al., 2017; Liu et al., 2017) explored pre-training the dialogue model using human-human or human-machine dialogue 2060 corpora before performing interactive learning with RL to address this concern.",
"A potential drawback with such pre-training approach is that the model may suffer from the mismatch of dialogue state distributions between supervised training and interactive learning stages.",
"While interacting with users, the agent's response at each turn has a direct influence on the distribution of dialogue state that the agent will operate on in the upcoming dialogue turns.",
"If the agent makes a small mistake and reaches an unfamiliar state, it may not know how to recover from it and get back to a normal dialogue trajectory.",
"This is because such recovery situation may be rare for good human agents and thus are not well covered in the supervised training corpus.",
"This will result in compounding errors in a dialogue which may lead to failure of a task.",
"RL exploration might finally help to find corresponding actions to recover from a bad state, but the search process can be very inefficient.",
"To ameliorate the effect of dialogue state distribution mismatch between offline training and RL interactive learning, we propose a hybrid imitation and reinforcement learning method.",
"We first let the agent to interact with users using its own policy learned from supervised pre-training.",
"When an agent makes a mistake, we ask users to correct the mistake by demonstrating the agent the right actions to take at each turn.",
"This user corrected dialogue sample, which is guided by the agent's own policy, is then added to the existing training corpus.",
"We fine-tune the dialogue policy with this dialogue sample aggregation (Ross et al., 2011) and continue such user teaching process for a number of cycles.",
"Since asking for user teaching at each dialogue turn is costly, we want to reduce this user teaching cycles as much as possible and continue the learning process with RL by collecting simple forms of user feedback (e.g. a binary feedback, positive or negative) only at the end of a dialogue.",
"Our main contributions in this work are: We design a neural network based task-oriented dialogue system which can be optimized end-to-end for natural language understanding, dialogue state tracking, and dialogue policy learning.",
"We propose a hybrid imitation and reinforcement learning method for end-to-end model training in addressing the challenge with dialogue state distribution mismatch between offline training and interactive learning.",
"The remainder of the paper is organized as follows.",
"In section 2, we discuss related work in building end-to-end task-oriented dialogue systems.",
"In section 3, we describe the proposed model and learning method in detail.",
"In Section 4, we describe the experiment setup and discuss the results.",
"Section 5 gives the conclusions.",
"Popular approaches in learning task-oriented dialogue include modeling the task as a partially observable Markov Decision Process (POMDP) (Young et al., 2013).",
"RL can be applied in the POMDP framework to learn dialogue policy online by interacting with users (Gasic et al., 2013).",
"The dialogue state and system action space have to be carefully designed in order to make the policy learning tractable (Young et al., 2013), which limits the model's usage to restricted domains.",
"Recent efforts have been made in designing end-to-end solutions for task-oriented dialogues, inspired by the success of encoder-decoder based neural network models in non-task-oriented conversational systems (Serban et al., 2015; Li et al., 2016).",
"Wen et al. (Wen et al., 2017) designed an end-to-end trainable neural dialogue model with modularly connected system components.",
"This system is a supervised learning model which is evaluated on fixed dialogue corpora.",
"It is unknown how well the model performance generalizes to unseen dialogue state during user interactions.",
"Our system is trained by a combination of supervised and deep RL methods, as it is shown that RL may effectively improve dialogue success rate by exploring a large dialogue action space (Henderson et al., 2008; Li et al., 2017).",
"Bordes and Weston (2017) proposed a task-oriented dialogue model using end-to-end memory networks.",
"In the same line of research, people explored using query-regression networks (Seo et al., 2016), gated memory networks (Liu and Perez, 2017), and copy-augmented networks (Eric and Manning, 2017) to learn the dialogue state.",
"These systems directly select a final response from a list of response candidates conditioning on the dialogue history without doing slot filling or user goal tracking.",
"Our model, on the other hand, explicitly tracks user's goal for effective integration with knowledge bases (KBs).",
"Robust dialogue state tracking has been shown (Jurccek et al., 2012) to 2061 be critical in improving dialogue success in task completion.",
"Dhingra et al. (2017) proposed an end-to-end RL dialogue agent for information access.",
"Their model focuses on bringing differentiability to the KB query operation by introducing a soft retrieval process in selecting the KB entries.",
"Such soft-KB lookup is prone to entity updates and additions in the KB, which is common in real world information systems.",
"In our model, we use symbolic queries and leave the selection of KB entities to external services (e.g. a recommender sys-tem), as entity ranking in real world systems can be made with much richer features (e.g. user pro-files, location and time context, etc.).",
"Quality of the generated symbolic query is directly related to the belief tracking performance.",
"In our proposed end-to-end system, belief tracking can be optimized together with other system components (e.g. language understanding and policy) during interactive learning with users.",
"Williams et al. (2017) proposed a hybrid code network for task-oriented dialogue that can be trained with supervised and reinforcement learning.",
"They show that RL performed with a supervised pre-training model using labeled dialogues improves learning speed dramatically.",
"They did not discuss the potential issue of dialogue state distribution mismatch between supervised pretraining and RL interactive learning, which is addressed in our dialogue learning framework.",
"Figure 1 shows the overall system architecture of the proposed end-to-end task-oriented dialogue model.",
"We use a hierarchical LSTM neural network to encode a dialogue with a sequence of turns.",
"User input to the system in natural language format is encoded to a continuous vector via a bidirectional LSTM utterance encoder.",
"This user utterance encoding, together with the encoding of the previous system action, serves as the input to a dialogue-level LSTM.",
"State of this dialogue-level LSTM maintains a continuous representation of the dialogue state.",
"Based on this state, the model generates a probability distribution over candidate values for each of the tracked goal slots.",
"A query command can then be formulated with the state tracking outputs and issued to a knowledge base to retrieve requested information.",
"Finally, the system produces a dialogue action, which is conditioned on information from the dialogue state, the estimated user's goal, and the encoding of the query results .",
"This dialogue action, together with the user goal tracking results and the query results, is used to generate the final natural language system response via a natural language generator (NLG).",
"We describe each core model component in detail in the following sections.",
"We use a bidirectional LSTM to encode the user utterance to a continuous representation.",
"We refer to this LSTM as the utterance-level LSTM.",
"The user utterance vector is generated by concatenating the last forward and backward LSTM states.",
"Let U k = ( w 1 , w 2 , ..., w T k ) be the user utterance at turn k with T k words.",
"These words are firstly mapped to an embedding space, and further serve as the step inputs to the bidirectional LSTM.",
"Let h t and h t represent the forward and backward LSTM state outputs at time step t .",
"The user utterance vector U k is produced by: U k = [ h T k , h 1 ] , where h T k and h 1 are the last states in the forward and backward LSTMs.",
"Dialogue state tracking, or belief tracking, maintains the state of a conversation, such as user's goals, by accumulating evidence along the sequence of dialogue turns.",
"Our model maintains the dialogue state in a continuous form in the dialogue-level LSTM ( LSTMD ) state s k .",
"s k is updated after the model processes each dialogue turn by taking in the encoding of user utterance U k and the encoding of the previous turn system output A k 1 .",
"This dialogue state serves as the input to the dialogue state tracker.",
"The tracker updates its estimation of the user's goal represented by a list of slot-value pairs.",
"A probability distribution P ( l m k ) is maintained over candidate values for each goal slot type m M : s k = LSTMD ( s k 1 , [ U k , A k 1 ]) (1) P ( l mk | U k , A <k ) = SlotDist m ( s k ) (2) where SlotDist m is a single hidden layer MLP with softmax activation over slot type m M .",
"The dialogue state tracking outputs are used to form an API call command to retrieve information from a knowledge base.",
"The API call command is 2062 User : Movie for the day after tomorrow, please System : Ok, what time do you prefer?",
"produced by replacing the tokens in a query command template with the best hypothesis for each goal slot from the dialogue state tracking output.",
"Alternatively, an n-best list of API calls can be generated with the most probable candidate values for the tracked goal slots.",
"In interfacing with KBs, instead of using a soft KB lookup as in (Dhingra et al., 2017), our model sends symbolic queries to the KB and leaves the ranking of the KB entities to an external recommender system.",
"Entity ranking in real world systems can be made with much richer features (e.g. user profiles, local context, etc.) in the back-end system other than just following entity posterior probabilities conditioning on a user utterance.",
"Hence ranking of the KB entities is not a part of our proposed neural dialogue model.",
"In this work, we assume that the model receives a ranked list of KB entities according to the issued query and other available sources, such as user models.",
"Once the KB query results are returned, we save the retrieved entities to a queue and encode the result summary to a vector.",
"Rather then encoding the real KB entity values as in (Bordes and Weston, 2017; Eric and Manning, 2017), we only encode a summary of the query results (i.e. item availability and number of matched items).",
"This encoding serves as a part of the input to the policy network.",
"A dialogue policy selects the next system action in response to the user's input based on the current dialogue state.",
"We use a deep neural network to model the dialogue policy.",
"There are three inputs to the policy network, (1) the dialogue-level LSTM state s k , (2) the log probabilities of candidate values from the belief tracker v k , and (3) the LSTM Dialogue State, System action at turn k Policy Network Query results encoding Slot value logits Figure 2: Dialogue state and policy network.",
"encoding of the query results summary E k .",
"The policy network emits a system action in the form of a dialogue act conditioning on these inputs: P ( a k | U k , A <k , E k ) = PolicyNet( s k , v k , E k ) (3) where v k represents the concatenated log probabilities of candidate values for each goal slot, E k is the encoding of query results, and PolicyNet is a single hidden layer MLP with softmax activation function over all system actions.",
"The emitted system action is finally used to produce a system response in natural language format by combining the state tracker outputs and the retrieved KB entities.",
"We use a template based NLG in this work.",
"The delexicalised tokens in the NLG template are replaced by the values from either the estimated user goal values or the KB entities, depending on the emitted system action.",
"By connecting all the system components, we have an end-to-end model for task-oriented dialogue.",
"Each system component is a neural network that takes in underlying system component's outputs 2063 in a continuous form that is fully differentiable, and the entire system (utterance encoding, dialogue state tracking, and policy network) can be trained end-to-end.",
"We first train the system in a supervised manner by fitting task-oriented dialogue samples.",
"The model predicts the true user goal slot values and the next system action at each turn of a dialogue.",
"We optimize the model parameter set by minimizing a linear interpolation of cross-entropy losses for dialogue state tracking and system action prediction: min KX k =1 h MX m =1 l m log P ( l mk | U k , A <k , E <k ; ) + a log P ( a k | U k , A <k , E k ; ) i (4) where s are the linear interpolation weights for the cost of each system output.",
"l mk is the ground truth label for the tracked user goal slot type m M at the k th turn, and a k is the true system action in the corpus.",
"Teaching Once obtaining a supervised training dialogue agent, we further let the agent to learn interactively from users by conducting task-oriented dialogues.",
"Supervised learning succeeds when training and test data distributions match.",
"During the agent's interaction with users, any mistake made by the agent or any deviation in the user's behavior may lead to a different dialogue state distribution than the one that the supervised learning agent saw during offline training.",
"A small mistake made by the agent due to this covariate shift (Ross and Bagnell, 2010; Ross et al., 2011) may lead to compounding errors which finally lead to failure of a task.",
"To address this issue, we propose a dialogue imitation learning method which allows the dialogue agent to learn from human teaching.",
"We let the supervised training agent to interact with users using its learned dialogue policy ( a | s ) .",
"With this, we collect additional dialogue samples that are guided by the agent's own policy, rather than by the expert policy as those in the supervised training corpora.",
"When the agent make mistakes, we ask users to correct the mistakes and demonstrate the expected actions and predictions for the agent to make.",
"Such user teaching precisely addresses Algorithm 1 Dialogue Learning with Human Teaching and Feedback 1: Train model end-to-end on dialogue samples D with MLE and obtain policy ( a | s ) .",
"eq 4 2: for learning iteration k = 1 : K do 3: Run ( a | s ) with user to collect new dialogue samples D 4: Ask user to correct the mistakes in the tracked user's goal for each dialogue turn in D 5: Add the newly labeled dialogue samples to the existing corpora: D D D 6: Train model end-to-end on D and obtain an updated policy ( a | s ) .",
"eq 4 7: end for 8: for learning iteration k = 1 : N do 9: Run ( a | s ) with user for a new dialogue 10: Collect user feedback as reward r 11: Update model end-to-end and obtain an updated policy ( a | s ) .",
"eq 5 12: end for the limitations of the currently learned dialogue model, as these newly collected dialogue samples are driven by the agent's own policy.",
"Specifically, in this study we let an expert user to correct the mistake made by the agent in tracking the user's goal at the end of each dialogue turn.",
"This new batch of annotated dialogues are then added to the existing training corpus.",
"We start the next round of supervised model training on this aggregated corpus to obtain an updated dialogue policy, and continue this dialogue imitation learning cycles.",
"Learning from human teaching can be costly, as it requires expert users to provide corrections at each dialogue turn.",
"We want to minimize the number of such imitation dialogue learning cycles and continue to improve the agent via a form of supervision signal that is easier to obtain.",
"After the imitation learning stage, we further optimize the neural dialogue system with RL by letting the agent to interact with users and learn from user feedback.",
"Different from the turn-level corrections in the imitation dialogue learning stage, the feedback is only collected at the end of a dialogue.",
"A positive reward is collected for successful tasks, and a zero reward is collected for failed tasks.",
"A step penalty is applied to each dialogue turn to encour-2064 age the agent to complete the task in fewer steps.",
"In this work, we only use task-completion as the metric in designing the dialogue reward.",
"One can extend it by introducing additional factors to the reward functions, such as naturalness of interactions or costs associated with KB queries.",
"To encourage the agent to explore the dialogue action space, we let the agent to follow a softmax policy during RL training by sampling system actions from the policy network outputs.",
"We apply REINFORCE algorithm (Williams, 1992) in optimizing the network parameters.",
"The objective function can be written as J k ( ) = E [ R k ] = E hP K k t =0 t r k + t i , with [0 , 1) being the discount factor. With likelihood ratio gradient estimator, the gradient of the objective function can be derived as: J k ( ) = E [ R k ] = X a k ( a k | s k ) log ( a k | s k ) R k = E [ log ( a k | s k ) R k ] (5) This last expression above gives us an unbiased gradient estimator.",
"We evaluate the proposed method on DSTC2 (Henderson et al., 2014a) dataset in restaurant search domain and an internally collected dialogue corpus 1 in movie booking domain.",
"The movie booking dialogue corpus has an average number of 8.4 turns per dialogue.",
"Its training set has 100K dialogues, and the development set and test set each has 10K dialogues.",
"The movie booking dialogue corpus is generated (Shah et al., 2018) using a finite state machine based dialogue agent and an agenda based user simulator (Schatzmann et al., 2007) with natural language utterances rewritten by real users.",
"The user simulator can be configured with different personalities, showing various levels of randomness and cooperativeness.",
"This user simulator is also used to interact with our end-to-end training agent during imitation and reinforcement learning stages.",
"We randomly select a user profile 1 The dataset can be accessed via https: //github.com/google-research-datasets/simulated-dialogue when conducting each dialogue simulation.",
"During model evaluation, we use an extended set of natural language surface forms over the ones used during training time to evaluate the generalization capability of the proposed end-to-end model in handling diverse natural language inputs.",
"The size of the dialogue-level and utterance-level LSTM state is set as 200 and 150 respectively.",
"Word embedding size is 300.",
"Embedding size for system action and slot values is set as 32.",
"Hidden layer size of the policy network is set as 100.",
"We use Adam optimization method (Kingma and Ba, 2014) with initial learning rate of 1e-3.",
"Dropout rate of 0.5 is applied during supervised training to prevent the model from over-fitting.",
"In imitation learning, we perform mini-batch model update after collecting every 25 dialogues.",
"System actions are sampled from the learned policy to encourage exploration.",
"The system action is defined with the act and slot types from a dialogue act (Henderson et al., 2013).",
"For example, the dialogue act confirm ( date = monday ) is mapped to a system action confirm date and a candidate value monday for slot type date .",
"The slot types and values are from the dialogue state tracking output.",
"In RL optimization, we update the model with every mini-batch of 25 samples.",
"Dialogue is considered successful based on two conditions: (1) the goal slot values estimated from dialogue state tracking fully match to the user's true goal values, and (2) the system is able to confirm with the user the tracked goal values and offer an entity which is finally accepted by the user.",
"Maximum allowed number of dialogue turn is set as 15.",
"A positive reward of +15.0 is given at the end of a successful dialogue, and a zero reward is given to a failed case.",
"We apply a step penalty of -1.0 for each turn to encourage shorter dialogue for task completion.",
"Table 4.3 and Table 4.3 show the supervised learning model performance on DSTC2 and the movie booking corpus.",
"Evaluation is made on DST accuracy.",
"For the evaluation on DSTC2 corpus, we use the live ASR transcriptions as the user input utterances.",
"Our proposed model achieves near state-of-the-art dialogue state tracking results on DSTC2 corpus, on both individual slot tracking and joint slot tracking, comparing to the recent published 2065 results using RNN (Henderson et al., 2014b) and neural belief tracker (NBT) (Mrksic et al., 2016).",
"In the movie booking domain, our model also achieves promising performance on both individual slot tracking and joint slot tracking accuracy.",
"Instead of using ASR hypothesis as model input as in DSTC2, here we use text based input which has much lower noise level in the evaluation of the movie booking tasks.",
"This partially explains the higher DST accuracy in the movie booking domain comparing to DSTC2.",
"Evaluations of interactive learning with imitation and reinforcement learning are made on metrics of (1) task success rate, (2) dialogue turn size, and (3) DST accuracy.",
"Figures 3, 4, and 5 show the learning curves for the three evaluation metrics.",
"In addition, we compare model performance on task success rate using two different RL training settings, the end-to-end training and the policy-only training, to show the advantages of performing end-to-end system optimization with RL.",
"Task Success Rate As shown in the learning curves in Figure 3, the SL model performs poorly.",
"This might largely due to the compounding errors caused by the mismatch of dialogue state distribution between offline training and interactive learning.",
"We use an extended set of user NLG templates during interactive evaluation.",
"Many of the test NLG templates are not seen by the supervised training agent.",
"Any mistake made by the agent in understanding the user's request may lead to compounding errors in the following dialogue Figure 3: Interactive learning curves on task success rate.",
"turns, which cause final task failure.",
"The red curve ( SL + RL ) shows the performance of the model that has RL applied on the supervised pre-training model.",
"We can see that interactive learning with RL using a weak form of supervision from user feedback continuously improves the task success rate with the growing number of user interactions.",
"We further conduct experiments in learning dialogue model from scratch using only RL (i.e. without supervised pre-training), and the task success rate remains at a very low level after 10K dialogue simulations.",
"We believe that it is because the dialogue state space is too complex for the agent to learn from scratch, as it has to learn a good NLU model in combination with a good policy to complete the task.",
"The yellow curve ( SL + IL 500 + RL ) shows the performance of the model that has 500 episodes of imitation learning over the SL model and continues with RL optimization.",
"It is clear from the results that applying imitation learning on supervised training model efficiently improves task success rate.",
"RL optimization after imitation learning increases the task success rate further.",
"The blue curve ( SL + IL 1000 + RL ) shows the performance of the model that has 1000 episodes of imitation learning over the SL model and continues with RL.",
"Similarly, it shows hints that imitation learning may effectively adapt the supervised training model to the dialogue state distribution during user interactions.",
"Average Dialogue Turn Size Figure 4 shows the curves for the average turn size of successful dialogues.",
"We observe decreasing number of dialogue turns in completing a task along the growing number of interactive learning sessions.",
"This shows that the dialogue agent learns better strategies in successfully completing the task with fewer 2066 Figure 4: Interactive learning curves on average dialogue turn size.",
"number of dialogue turns.",
"The red curve with RL applied directly after supervised pre-training model gives the lowest average number of turns at the end of the interactive learning cycles, comparing to models with imitation dialogue learning.",
"This seems to be contrary to our observation in Figure 3 that imitation learning with human teaching helps in achieving higher task success rate.",
"By looking into the generated dialogues, we find that the SL + RL model can handle easy tasks well but fails to complete more challenging tasks.",
"Such easy tasks typically can be handled with fewer number of turns, which result in the low average turn size for the SL + RL model.",
"On the other hand, the imitation plus RL models attempt to learn better strategies to handle those more challenging tasks, resulting in higher task success rates and also slightly increased dialogue length comparing to SL + RL model.",
"Dialogue State Tracking Accuracy Similar to the results on task success rate, we see that imitation learning with human teaching quickly improves dialogue state tracking accuracy in just a few hundred interactive learning sessions.",
"The joint slots tracking accuracy in the evaluation of SL model using fixed corpus is 84.57% as in Table 4.3.",
"The accuracy drops to 50.51% in the interactive evaluation with the introduction of new NLG templates.",
"Imitation learning with human teaching effectively adapts the neural dialogue model to the new user input and dialogue state distributions, improving the DST accuracy to 67.47% after only 500 imitation dialogue learning sessions.",
"Another encouraging observation is that RL on top of SL model and IL model not only improves task success rate by optimizing dialogue policy, but also Figure 5: Interactive learning curves on dialogue state tracking accuracy.",
"further improves dialogue state tracking performance.",
"This shows the benefits of performing end-to-end optimization of the neural dialogue model with RL during interactive learning.",
"End-to-End RL Optimization To further show the benefit of performing end-to-end optimization of dialogue agent, we compare models with two different RL training settings, the end-to-end training and the policy-only training.",
"End-to-end RL training is what we applied in previous evaluation sections, in which the gradient propagates from system action output layer all the way back to the natural language user input layer.",
"Policy-only training refers to only updating the policy network parameters during interactive learning with RL, with all the other underlying system parameters fixed.",
"The evaluation results are shown in Figure 6.",
"From these learning curves, we see clear advantage of performing end-to-end model update in achieving higher dialogue task success rate during interactive learning comparing to only updating the policy network.",
"We further evaluate the proposed method with human judges recruited via Amazon Mechanical Turk.",
"Each judge is asked to read a dialogue between our model and user simulator and rate each system turn on a scale of 1 (frustrating) to 5 (opti-mal way to help the user).",
"Each turn is rated by 3 different judges.",
"We collect and rate 100 dialogues for each of the three models:",
"(i) SL model,",
"(ii) SL model followed by 1000 episodes of IL,",
"(iii) SL and IL followed by RL.",
"Table 3 lists the mean and standard deviation of human scores overall system turns.",
"Performing interactive learning with imitation and reinforcement learning clearly improves the quality of the model according to human judges.",
"In this work, we focus on training task-oriented dialogue systems through user interactions, where the agent improves through communicating with users and learning from the mistake it makes.",
"We propose a hybrid learning approach for such systems using end-to-end trainable neural network model.",
"We present a hybrid imitation and reinforcement learning method, where we firstly train a dialogue agent in a supervised manner by learning from dialogue corpora, and continuously to improve it by learning from user teaching and feedback with imitation and reinforcement learning.",
"We evaluate the proposed learning method with both offline evaluation on fixed dialogue corpora and interactive evaluation with users.",
"Experimental results show that the proposed neural dialogue agent can effectively learn from user teaching and improve task success rate with imitation learning.",
"Applying reinforcement learning with user feedback after imitation learning with user teaching improves the model performance further, not only on the dialogue policy but also on the dialogue state tracking in the end-to-end training framework."
] | [
"method",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"method",
"objective",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"objective",
"abstain",
"abstain"
] |
[
"Predicting the answer to a product-related question is an emerging field of research that recently attracted a lot of attention.",
"Answering subjective and opinion-based questions is most challenging due to the dependency on customer-generated content.",
"Previous works mostly focused on review-aware answer prediction; however, these approaches fail for new or unpopular products, having no (or only a few) reviews at hand.",
"In this work, we propose a novel and complementary approach for predicting the answer for such questions, based on the answers for similar questions asked on similar products.",
"We measure the contextual similarity between products based on the answers they provide for the same question.",
"A mixture-of-expert framework is used to predict the answer by aggregating the answers from contextually similar products.",
"Empirical results demonstrate that our model outperforms strong baselines on some segments of questions, namely those that have roughly ten or more similar resolved questions in the corpus.",
"We additionally publish two large-scale datasets 1 used in this work, one is of similar product question pairs, and the second is of product question-answer pairs.",
"Product-related Question Answering (PQA) is a popular and essential service provided by many e-commerce websites, letting consumers ask product related questions to be answered by other consumers based on their experience.",
"The large archive of accumulated resolved questions can be further utilized by customers to support their purchase journey and automatic product question answering tools (e.g. Jeon et al. (2005); Cui et al. (2017); Work carried out during an internship at Amazon.",
"Carmel et al. (2018)).",
"However, there are many unanswered questions on these websites, either because a newly issued question has not attracted the community attention yet, or because of many other reasons (Park et al., 2015).",
"This may frustrate e-commerce users, in particular when their purchase decision depends on the question's answer.",
"Automatic PQA may assist the customers and the sellers by answering these unanswered questions, based on various diversified resources.",
"Previous PQA approaches leverage product spec-ifications and description information (Cui et al., 2017; Lai et al., 2018; Gao et al., 2019), as well as customer-reviews (Yu et al., 2012; McAuley and Yang, 2016; Yu and Lam, 2018; Das et al., 2019; Fan et al., 2019; Chen et al., 2019; Deng et al., 2020), for answering product related questions.",
"However, there are two notable shortcomings to these two approaches.",
"Product information can typically address questions about product features and functionality, but can't address complex and subjective questions such as opinion question ( Is it good for a 10 year old? ), advice-seeking question ( What is the color that best fit my pink dress? ), or unique usage questions ( Can I play Fifa 2018 on this laptop? ).",
"Customer-reviews, on the other hand, can partially address this kind of questions (Wan and McAuley, 2016), yet there are many products with few or no reviews available, either because they are new on the site or are less popular.",
"We propose a novel and complementary approach for answering product-related questions based on a large corpus of PQA.",
"Given an unanswered product question, we seek similar resolved questions 2 about similar products and leverage their existing answers to predict the answer for the cus-tomer's question.",
"We call our method SimBA 2 We consider questions similar if they have the same semantic intent.",
"For example, can I wash this?",
", Is the product washable?",
", Is it ok to clean it with water?",
"are all considered as similar questions when asked in context of a similar product.",
"( Sim ilarity B ased A nswer Prediction).",
"For example, the answer for the question Will these jeans shrink after a wash? , asked about a new pair of jeans on the website, may be predicted based on the answers for similar questions asked about other jeans that share properties such as fabric material, brand, or style.",
"An example is shown in Table 1.",
"The main hypothesis we explore in this work is whether the answer to a product question can be predicted, based on the answers for similar questions about similar products, and how reliable this prediction is.",
"As our method relies on the existing PQA corpus, it addresses the two mentioned shortcomings of the previous approaches.",
"First, it can address a variety of product-related questions that are common in PQA, including subjective and usage questions.",
"Second, our method can provide answers to new or less popular products as it leverages an existing set of similar questions from other similar products.",
"A key element of our proposed method is a novel concept that we refer to as Contextual Product Similarity, which determines whether two products are similar in the context of a specific question.",
"For example, two smart-watches may be similar with regards to their texting capability but different with regards to sleep monitoring.",
"In Section 3 we formally define this concept and propose a prediction model for measuring contextual similarity between products, with respect to a given question.",
"Additionally, we describe an efficient method to train this model by leveraging an existing PQA corpus.",
"Another appealing property of SimBA is its ability to support the predicted answer by providing the list of highly similar questions upon which the answer was predicted, hence increasing users' confidence and enhancing user engagement.",
"Our main contributions are:",
"(a) A novel PQA method that overcomes several shortcomings of previous methods.",
"(b) A novel concept of Contextual Product Similarity and an effective way to automatically collect annotations to train this model.",
"(c) Finally, publishing two large scale datasets, one is a question similarity data set and the second is a large-scale Amazon product questions and answers dataset, details are provided in Section 4.",
"Empirical evaluation of our method demonstrates that it outperforms a strong baseline in some question segments, and that a hybrid model is effective in all the vast majority of the questions.",
"Automatic aswering product related questions has become a permanent service provided by many e-commerce websites and services (Cui et al., 2017; Carmel et al., 2018).",
"Questions are typically answered based on product details from the catalog, existing Q&A's on the site, and customer reviews.",
"Each of these resources, used for answer generation, has been studied extensively by the research community recently, probably due to the complexity of this task, the availability of appropriate datasets (McAuley, 2016), and the emergent increase in on-line shopping usage.",
"Lai et al. (2018) built a question answering system based on product facts and specifications.",
"They trained a question answering system by transfer learning from a large-scale Amazon dataset to the Home Depot domain.",
"Gao et al. (2019) generated an answer from product attributes and reviews using adversarial learning model which is composed of three components: a question-aware review representation module, a key-value attribute graph, and a seq2seq model for answer generation.",
"Yu et al. (2012) answered opinion questions by exploiting hierarchical organization of consumer reviews, where reviews were organized according to the product aspects.",
"The publication of Amazon datasets of reviews 3 and Q&As (McAuley, 2016), triggered a flood of studies on review-aware answer prediction and generation.",
"McAuley and Yang (2016) formulated the review based question answering task as a mixture-of-experts framework each review is an expert 3 https://nijianmo.github.io/amazon/index.html that votes on the answer to a yes/no question.",
"Their model learns to identify relevant' reviews based on those that vote correctly.",
"In a following work, Wan and McAuley (2016) observed that questions have multiple, often divergent, answers, and the full spectrum of answers should be further utilized to train the answering system.",
"Chen et al. (2019) described a multi-task attention mechanism which exploits large amounts of Q&As, and a few manually labeled reviews, for answer prediction.",
"Fan et al. (2019) proposed a neural architecture, directly fed by the raw text of the question and reviews, to mark review segment as the final answer, in a reading comprehension fashion.",
"Das et al. (2019) learned an adversarial network for inferring reviews which best answer a question, or augment a given answer.",
"Deng et al. (2020) incorporated opinion mining into the review-based answer generation.",
"Yu and Lam (2018) generated aspect-specific representation for questions and reviews for answer prediction for yes-no questions.",
"Yu et al. (2018) used transfer learning from a resource-rich source domain to a resource-poor target domain, by simultaneously learning shared representations of questions and reviews in a uni-fied framework of both domains.",
"All this line of works assume the existence of rich set of product reviews to be used for question answering.",
"This solution fails when no reviews are available.",
"The challenge of review generation for a given product, while utilizing similar products' reviews, was addressed by Park et al. (2015).",
"For a given product they extracted useful sentences from the reviews of other similar products.",
"Similarly, (Pourgholamali, 2016) mined relevant content for a product from various content resources available for similar products.",
"Both works focused on the extraction of general useful product related information rather than answering a specific product question, as in our case.",
"Second, the product-similarity methods they considered rely on product specifi-cations and description, and do not depend on the question to be answered, while our method considers a specific question at hand when estimating contextual product similarity.",
"In this section, we introduce the Similarity-Based Answer-prediction (SimBA) method for predicting the answer for a product question, based on the answers for other similar product questions.",
"We Figure 1: Overview of SimBA answer prediction framework.",
"restrict our study to yes/no questions only, due to their popularity in the PQA domain (54% on our PQA dataset), and following common practices in answer prediction studies (McAuley and Yang, 2016; Yu and Lam, 2018).",
"Figure 1 presents our prediction framework and its main components.",
"Formally, a question-product-answer tuple is denoted by r j = ( q j , p j , a j ) , where a j { (cid:48) yes (cid:48) , (cid:48) no (cid:48) } .",
"C = { r j } Nj =1 is the set of N tuples of a given product category.",
"r t = ( q t , p t , ?) 4 is the target record of an unanswered question q t , asked about product p t .",
"We treat C as the knowledge-base we use for answering q t .",
"Given a target record r t , in order to predict its answer a t , we first retrieve a set of records from C with the most similar questions to q t (Figure 1, stage 1).",
"We denote the retrieved records as siblings of r t .",
"We then filter the siblings by applying a Question-to-Question similarity (Q2Q) model, keeping only records with highly similar questions which are expected to have the same question intent as of q t , (Figure 1, stage 2).",
"We denote these records as twins of r t .",
"We then apply our Contextual Product Similarity (CPS) model to measure the contextual similarity between r t and its twins (Figure 1, stage 3).",
"The CPS similarity score is used to weight the twins by considering them as voters, applying a mixture-of-experts model over their answers for the final answer prediction (Fig-ure 1, stage 4).",
"More details about the model's components, the training processes, and other spec-ifications, are described in the following.",
"Given a target record r t , and a corpus of product-question-answer records C , our first goal is to re-4",
"trieve all records with a question having the same intent as of q t .",
"As C might be very large, applying a complex neural model to measure the similarity of each question in C to q t is often infeasible.",
"We therefore apply a two step retrieval process.",
"In a preliminary offline step, we index the records in C by creating embedding vectors for their questions, using a pre-trained encoder.",
"For retrieval, done both during training and inference, we similarly embed the question q t into vector e t .",
"We then use a fast Approximate K Nearest Neighbors (AKNN) search to retrieve K records, with the most similar questions, based on the cosine similarity between e t and the embedding vectors of the questions in C .",
"We denote the set of retrieved siblings of r t by S ( r t ) .",
"The retrieved sibling records are those with the most similar questions to the target question.",
"In the second step of the retrieval process, we enhance our record selection by applying a highly accurate transformer-based Question-to-Question (Q2Q) classifier (See Section 5.1), which we train over our question to question similarity dataset (Section 4.1).",
"The Q 2 Q ( q t , q k ) classifier predicts the similarity between a target question q t and each of the questions q k in S ( r t ) .",
"A record r k is considered a twin of r t if Q 2 Q ( q t , q k ) > , where 0 .",
"5 1 .",
"0 is a hyper-parameter of the system.",
"We denote the set of twins of r t by T ( r t ) .",
"We consider products p 1 and p 2 to be contextually similar, with respect to a yes/no question q , if the answer to q on both products is the same 5 .",
"Given a pair of twin records ( r 1 , r 2 ) , our CPS model is aims to predict the contextual similarity between them, i.e. whether their (highly similar) questions have the same answer.",
"Since r 1 and r 2 are twins, their questions are expected to have the same intent; yet, they might be phrased differently.",
"To avoid losing any information, we provide both questions as input to the CPS model, during training and during inference time.",
"between a target record r t , and one of its twins record r j .",
"For each record, the question-product pair is embedded using a pre-trained transformer encoder, allowing the product textual content and the question text attend each other 6 : H t = Encoder ( q t , p t ) , H j = Encoder ( q j , p j ) The two models share weights to avoid over-fitting and for more efficient learning.",
"A second encoder embeds the textual content of both products, encapsulating the similarity between them: H tj = Encoder ( p t , p j ) Then, a one hidden MLP layer takes the concatenation of the three embedding vectors, to predict the probability of a t = a j , tj = CP S ( r t , r j ) = P ( a t = a j | r t , r j ) = MLP ( H t H j H tj ) (1) Another key advantage of the CPS model is its ability to be trained on a large scale, without human annotations, by simply yielding the training labels directly from the polarity between the answers of twin pairs extracted from our training data.",
"For any pair of twins ( r i , r j ) : label ( r i , r j ) = (cid:40) similar , a i = a j different , a i (cid:54) = a j (2) 6 The product textual content can be accumulated from several resources.",
"A mixture of experts is a widely-used method to combine the outputs of several classifiers by associating a weighted confidence score with each classifier (Jacobs et al., 1991).",
"In our setting, experts are individual twins that lend support for or against a particular answer for a question.",
"Each twin is weighted by its contextual similarity to the target record r t , as predicted by the CPS model.",
"its twins, r j T ( r t ) is determined by ( r j ) = max ( 2 tj , w min ) where tj = CP S ( r t , r j ) , and 0 w min 0 .",
"5 is a lower weight-limit; a hyper-parameter that we tune on the development set.",
"7 The predicted class of a t is therefore derived by P red ( a t | r t ) = sign (cid:88) r j T ( r t ) ( r j ) ( a j ) (3) where positive/negative P red indicates yes'/no' respectively, and ( a ) = (cid:26) +1 , a = yes' 1 , a = no' .",
"Our methodology can be easily expanded to incorporate more answer predictors (voters) of different types into SimBA.",
"An example for such an expansion is described at Section 5.3.",
"We introduce two new datasets to experiment with our answer prediction approach: 1) The Amazon Product Question Similarity (Amazon-PQSim) dataset which is used to train our Q2Q model; 2) The Amazon Product Question Answers (Amazon-PQA) dataset of product related Q&As, used for training the SimBA model.",
"7 We tried using the CPS raw score for all twins, i.e. w min = 0 , however, using a fine-tuned minimal weight yielded better results.",
"We collected a first-of-a-kind question-to-question similarity dataset of product-question pairs from the Amazon website (Amazon-PQSim. See Table 2 for examples).",
"Unlike the Quora dataset of general question pairs 8 , product questions are asked in the context of a designated product page.",
"This makes them unique and different from questions asked in other domains.",
"For example, the question Is it waterproof?",
", when appears on the Fitbit Flyer detailed page, should implicitly be interpreted as Is Fitbit Flyer waterproof?",
".",
"The following steps were taken for the data collection:",
"(a) randomly sampling product-questions from the Amazon website.",
"(b) filtering out some of these questions (e.g., non-English questions, for more details, see Appendix A).",
"(c) For each of the remaining questions, we retrieved up to three candidate similar questions from the collection.",
"A question is paired with the original question if the Jaccard similarity among them is in the range of [0 . 3 , 0 . 5] .",
"We ignore highly similar questions ( > 0 . 5 ) since we don't want nearly verbatim pairs in our dataset, as well as dissimilar pairs ( < 0 . 3 ).",
"(d) Finally we used the Appen crowd-sourcing platform 9 for manual annotation of question pairs similarity 10 .",
"Each question pair was labeled by at least three judges, and up to seven, until reaching agreement of 70% or more.",
"The above steps resulted in a nearly balanced dataset (1.08 positive-negative ratio) of more than 180K product question pairs with judges agreement of 70% or more, and among them about 90K question pairs have perfect judges agreement (1.14 8 https://www.kaggle.com/c/quora-question-pairs 9 https://appen.com 10 As the questions are asked in context of a specific product, they are often written in an anaphoric form (e.g. Is it waterproof? ).",
"To keep our dataset general, we instructed the judges to accept such questions as if they included the actual related product name.",
"For example, the pair Is it waterproof?",
"and Is this Fitbit waterproof?",
"were labeled as similar .",
"We collected a large corpus of product questions and answers from the Amazon website, similar to the popular Amazon Q&A dataset (McAuley, 2016).",
"Since our answer prediction method directly utilizes an existing corpus of resolved questions, we aim to collect all available questions per narrow sub-category instead of a sample of questions across broad categories by the popular Amazon Q&A dataset.",
"For example, instead of sampling from the broad Electronics category, we collect all questions under the narrower Monitors and Receivers categories.",
"Raw Data Extraction We collected all product questions, with their answers, from 100 subcategories, available on the Amazon website in August 2020.",
"Overall, 10M questions were collected, with 20.7M answers, on 1.5M products.",
"For full statistics of the raw data, see Table 7 in Appendix A. Yes/No Question Classification We followed (He and Dai, 2011) for detecting Yes/No questions using simple heuristics.",
"See Appendix A for details.",
"Yes/No Answer Labeling Questions are typically answered by free-text answers, posted independently by multiple users.",
"In order to convert these answers into a single yes/no answer, we first classified each answer into one of three classes: yes , no and maybe , and then used majority vote among the classified answers.",
"We used a pre-trained RoBERTa-based classifier, and trained the model on McAuley's dataset (McAuley, 2016), taking only yes/no questions.",
"See Appendix A for details.",
"We experiment with eleven product categories covered by our Amazon-PQA dataset (Section 4.2), training a SimBA answer prediction model for each of the categories independently.",
"Next, we describe the data preparation steps for each of the SimBA components.",
"For each record r C ( C is the category dataset), we use AKNN to retrieve the topK similar siblings from C , while",
"making sure that neither of them share the same product with r .",
"We collect training example pairs by coupling each record r with each of its siblings: D (cid:48) ( C ) = (cid:83) r i C { ( r i , r j ) | r j S ( r i ) } .",
"For retrieval we use Universal Sentence Encoder (USE) (Cer et al., 2018) to embed each question q i into a 512-length vector e i .",
"We use the Annoy 11 python library for the implementation of efficient AKNN retrieval.",
"In all experiments, for each record we retrieve the top-K ( K = 500) similar records, based on the cosine-similarity between the embedding vectors.",
"Twin Detection Using the Q2Q Model For each sibling pair ( r i , r j ) D (cid:48) ( C ) , we use our Q2Q model to score their question-similarity and keep only those with Q 2 Q ( q i , q j ) > to yield a collection of twin pairs, D ( C ) .",
"We use = 0 .",
"9 to ensure only highly similar question pairs.",
"For our Q2Q model, we apply a standard pre-trained RoBERTa (Liu et al., 2019) classifier.",
"Specifically, we use Hugging-Face base-uncased pre-trained model 12 and fine-tune 13 it for the classification task on our Q2Q dataset 14 , while splitting the data into train, dev and test sets with 80%-10%-10% partition, respectively.",
"For = 0 .",
"5 (its minimal value) the model achieves test accuracy of 83.2% with a precision of 81.3% and a recall of 87.7%.",
"When setting the twin confidence level threshold to = 0 .",
"9 , the precision of the Q2Q model raises to 89.9% with a recall of 69.5%.",
"We compare the performance of the Q2Q similarity classifier with several unsupervised baselines, namely:",
"(a) Jaccard similarity,",
"(b) cosine similarity over USE embedding, and",
"(c) cosine similarity over RoBERTa 15 embedding.",
"The results are summarized in Table 3, showing that the Q2Q model significantly outperforms these baselines.",
"11 https://github.com/spotify/annoy 12 https://github.com/huggingface/transformers 13 We use batch size 32, maximum sequence length of 128, learning rate 5e-5, and 3 epochs.",
"14 We only used the examples with full agreement.",
"15 Hugging-Face sentence-transformers roberta-large-nli-stsb-mean-tokens model.",
"Training The CPS model predicts the contextual similarity between a pair of twin records.",
"In our experiments, the textual content of a product consists of the product title concatenated with the product bullet points, separated by semicolons.",
"The question text is the original query as appeared in the Amazon PQA-dataset.",
"For the encoding modules of the CPS model we use a standard pre-trained RoBERTa-based model as well, while using the [ SEP ] token for separating the two inputs to each encoder.",
"For training, twin pairs are labeled according to their contextual similarity using Equation 2.",
"We train, fine-tune, and test, an independent CPS model for each category set C , using D ( C ) , D dev ( C ) , and D test ( C ) (details of the data split described in Appendix A).",
"The training set D ( C ) is created as described in Section 5.1.",
"D dev ( C ) and D test ( C ) , are created the same with one mod-ification rather than retrieving the siblings for a record from the dataset it belongs to, the siblings are retrieved from D ( C ) , for both D dev ( C ) , and D test ( C ) .",
"This represents a real-world scenario where existing products with their related questions are used as a corpus for predicting the answer to a question about a new product.",
"Each product with all related questions appear only in one of these sets.",
"Evaluation We evaluate the CPS model by measuring the accuracy of its contextual similarity prediction over D test ( C ) .",
"The accuracy per category is presented in Table 4.",
"The model achieves a relatively high accuracy with a macro average of 77.2% over all categories, presenting a significant lift of 9.7% over the majority decision baseline.",
"This is an encouraging result, considering the fact that the answers for many questions cannot be directly inferred from the product textual information.",
"We conjecture that the model is able to learn the affinity between different products, in the context of a given question, for predicting their contextual similarity.",
"For example, the two backpacks Ranvoo Laptop Backpack and Swiss Gear Bungee Backpack , were correctly classified by the CPS model as similar ( 0 . 5 ) in context of the question Will this fit under a plane seat? , and classified as different ( < 0 . 5 ) in context of the question Does it have a separate laptop sleeve? .",
"We experiment with our SimBA model and with a few baselines over the test set of all categories.",
"The first one is Majority which returns the majority answer among all records in the category.",
"Other methods are described next.",
"SimBA Given a target record r t , SimBA scores each of its twins by the CPS model and predicts the answer for q t , using Equation 3.",
"w min was fine-tuned on the combined dev set of all categories and was set to 0.38.",
"Question Similarity Only (QSO) We modify the SimBA model to ignore the CPS classification score when implementing the Mixture-of-Experts model (Eq. 3), by setting an equal weight of 1 .",
"0 to all twin votes: P red ( a t | r t ) = sign (cid:16)(cid:80) r j T ( r t ) ( a j ) (cid:17) .",
"Product Similarity Only (PSO) We modify the SimBA model by setting q t and q j to empty strings at the input of the CPS model, both during training and during inference, forcing it to rely on the products' textual content alone.",
"The twin retrieval process remains untouched.",
"Answer Prediction Classifier (APC) We experiment with a direct prediction approach that only considers the product textual content and the question for answer prediction.",
"For each category C , we fine-tune a pre-trained RoBERTa-based classifier over all records r j C , using q j and p j (separated by the [ SEP ] token) as input and ( a j ) as the training label.",
"SimBA+APC The experimental results show that different answer-prediction methods (e.g. SimBA vs APC) may be preferable for different product categories.",
"Therefore, we combine both methods, for achieving optimal results, by mixing # Twins Answer (Monitors) Does this require WiFi?",
"Mixture-of-Experts approach:",
"where t is the APC predicted answer, and ( r t ) = 1 , 2 and 3 for | T ( r t ) | 10 , 10 < | T ( r t ) | < 50 and | T ( r t ) | 50 , respectively 16 .",
"All values ( > 0 ) are fine-tuned on the development set for each category separately.",
"The values we used are detailed in Table 10 in Appendix A. 5.4 Answer Prediction Evaluation The answer prediction accuracy results of all tested predictors, macro-averaged over D test ( C ) of all categories, are presented in Figure 3.",
"We inspect the performance of the methods on different subsets of the test data, where each subset is determined by all records having at least x twins, x [0 .. 130] .",
"The horizontal axis indicates the minimal number of twins in the subset and the percentage of the data each subset represents.",
"For example, the results at x = 0 represent the entire test set, while the results at x = 10 represents the subset of questions with at least 10 twins, account for 40.2% of the test set.",
"hypothesize that \"obvious\" questions, for which the answer is the same across many products, are rarely asked hence have fewer twins.",
"In contrast, informative questions, for which the answer is varied across products, are frequently asked w.r.t. many products, hence have many twins.",
"Therefore we see a drop in accuracy of the Majority baseline as the number of twins grows.",
"The accuracy of QSO is significantly higher than the majority-vote baseline.",
"This demonstrates an interesting phenomena in the data of similar questions that tend to have the same answer over variety of products, typically of the same type.",
"A few examples are presented in Table 5.",
"The QSO method successfully detects these groups of questions and predicts the majority answer for each such group.",
"We find that PSO method generally doesn't improve over QSO.",
"This is somewhat surprising, as we expected that using product similarity information, such as brand, model, or key features, would increase the prediction accuracy.",
"This demonstrates the importance of question-context, as used in SimBA, in addition to the product information alone.",
"Moving to SimBA, we can see a large performance improvement over the QSO and PSO methods, which we attribute directly to the CPS model.",
"We also see consistent improvement in accuracy with the number of twins, likely due to the larger support the model has for predicting the answer.",
"The APC method, despite its relative simplicity, performs very well and greatly outperforms the majority-vote and the QSO and PSO baselines.",
"For the segment of questions with less than 10 twins, APC outperforms the SimBA method.",
"This segment represents roughly 60% of the questions.",
"However, for the segment of questions with 60 or more twins, which accounts for 13.6% of the questions, SimBA method consistently outperforms the inductive baseline by 1-2%.",
"When inspecting the results by category, as shown in Table 6, we can see that considering all questions with at least 1 twin, the APC method dominates in 7 out of the 11 categories, while for questions with at least 60 twins, SimBA method dominates in 6 out of the 11 categories.",
"Finally, we see that the two approaches compliment each other and can be effectively joined, as the SimBA+APC method outperforms both of them over all subsets.",
"We presented SimBA, a novel answer prediction approach in the PQA domain, which directly leverages similar questions answered with respect to other products.",
"Our empirical evaluation shows that on some segments of questions, namely those with roughly ten or more similar questions in the corpus, our method can outperform a strong inductive method that directly utilizes the question and the textual product content.",
"We further show that the two approaches are complementary and can be integrated to increase the overall answer prediction accuracy.",
"For future work, we plan to explore how SimBA can be extended and be applied beyond yes-no questions, e.g., for questions with numerical answers or open-ended questions.",
"Another interesting research direction is combining additional voters to the Mixture-of-Experts model, such as a review-aware answer predictor or a product details-based predictor.",
"Additionally, our current evaluation considered a static view of the answered product-question corpus, we plan to explore temporal aspects of our method, for example, considering questions age or ignoring answers of obsolete products that might be irrelevant.",
"Ohad Rozen would like to express his gratitude to Yanai Elazar, Vered Shwartz, and Ido Dagan for providing him valuable advice while he was conducting this research during his internship at Amazon."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"result",
"result",
"objective",
"abstain",
"objective",
"other"
] |
[
"A limitation of current neural dialog models is that they tend to suffer from a lack of specificity and informativeness in generated responses, primarily due to dependence on training data that covers a limited variety of scenarios and conveys limited knowledge.",
"One way to alleviate this issue is to extract relevant knowledge from external sources at decoding time and incorporate it into the dialog response.",
"In this paper, we propose a posthoc knowledge-injection technique where we first retrieve a diverse set of relevant knowledge snippets conditioned on both the dialog history and an initial response from an existing dialog model.",
"We construct multiple candidate responses, individually injecting each retrieved snippet into the initial response using a gradient-based decoding method, and then select the final response with an unsupervised ranking step.",
"Our experiments in goal-oriented and knowledge-grounded dialog settings demonstrate that human annotators judge the outputs from the proposed method to be more engaging and informative compared to responses from prior dialog systems.",
"We further show that knowledge-augmentation promotes success in achieving conversational goals in both experimental settings.",
"Generic responses which lack specificity have been a major issue in existing dialog models (Hosseini-Asl et al., 2020; Dinan et al., 2019a).",
"The issue in part stems from bottlenecks in dialog models due to a limited scope of scenarios and access to limited knowledge available during training.",
"On the other hand, encoding all possible world knowledge at training time is not feasible, and even undesirable in cases where knowledge sources are dynamically varying (Ghazvininejad et al., 2018; Majumder et al., 2020b; Zhao et al., 2020; Bruyn et al., 2020; Kim et al., 2020; Prabhumoye et al., 2021).",
"One possible approach is to incorporate There are plenty of museums to visit around Cambridge.",
"relevant knowledge at decoding-time.",
"For example, in Figure 1, the user is seeking options for a fun activity around Cambridge.",
"While the initial dialog response suggests watching a movie as an option, it does not provide any information behind that choice.",
"We propose and evaluate an approach for unsupervised knowledge injection into a dialog model's response at decoding time 1 not addressed in any previous work.",
"We first sample a response from the model (trained on dialog data) conditioned on the dialog context.",
"Next, we utilize the dialog context and the sampled response to query external knowledge sources.",
"Finally, the retrieved knowledge is used to construct a more informative and engaging response (Figure 1).",
"A major advantage of such post-hoc knowledge injection is its flexibility in adding newer knowledge sources especially where the success of achieving conversational goals relies upon the availability of relevant knowledge.",
"Post-hoc injection also promotes efficiency in NLP applications (Schwartz et al., 2020; Strubell et al., 2019): it mitigates the need to retrain dialog models to accommodate dynamically evolving knowledge.",
"We experiment with two types of knowledge sources: language models, which we treat as parametric knowledge bases (Petroni et al., 2019; 1 Code: https://github.com/majumderb/poki 3140 Dialog Model Dialog History Post-hoc Knowledge Initial Response x d Knowledge Sources knowledge snippets N Knowledge Fidelity for k i Dialog History forward pass for LM f luency Entailment with backward pass with constraints Candidate* Final Response x fi Knowledge Selection Constrained Decoding Relevance-Redundancy tradeo to select out of snippets B NDPPN B Dialog Model Candidate Final Responses B Rank w.r.to likelihood and linguistic diversity Final Response Ranking *for each snippet k i Figure 2: Pipeline of POKI: It first retrieves post-hoc knowledge from external sources based on dialog history and an initial response from a dialog model. Then the most relevant and diverse knowledge snippets are selected from the retrieved set. Each selected snippet is individually combined with the initial response through constrained decoding to generate a candidate final response. At last, the final response is selected via an unsupervised ranking step. Note that POKI requires no additional training. Brown et al., 2020); and user review datasets such as Yelp reviews (Hajas et al., 2014) as nonparametric knowledge sources ( 2).",
"Since it is possible to gather a large amount of related knowledge given a query, we select a relevant and diverse (estimated via information-theoretic measures) subset of knowledge snippets using an unsupervised method (3.1).",
"Then, a gradient-based inference approach is used to construct an updated response that incorporates the selected knowledge ( 3.2).",
"Note that our framework does not require retraining the existing dialog modelit only relies upon updating the model's output hidden states at decoding time for unsupervised knowledge injection.",
"We experiment with two scenarios: goal-oriented and knowledge-grounded dialog where the training data covers only a fraction of the needed knowledge.",
"Automatic evaluation reveals that our method is capable of generating highly diverse responses in both settings.",
"In some cases, the generated response shows high overlap with the original target response showing that our unsupervised method bridges the knowledge gap between available knowledge and human-written responses present in the existing dialog corpus.",
"An extensive human evaluation confirms that generated responses are indeed engaging, interesting, and human-like without any loss in fluency.",
"To pinpoint the usefulness of knowledge injection in the above settings, we design a real-time study (5.3) where users interact with our system to reach a conversational goal (e.g. planning a holiday or knowing more about the solar system).",
"We find that external knowledge enables users to achieve their goals more efficiently.",
"Additionally, we observe that the our approach of sub-selecting relevant but diverse knowledge leads to responses that promote success in achieving conversational goals.",
"Our goal is to construct a dialog response by injecting knowledge (from external textual sources) at decoding time, without having to retrain the models.",
"Consider a dialog model M from which we can sample a dialog response x d given a dialog history H .",
"We shall refer to the response x d sampled from such a model without any decoding time knowledge injection as the initial response.",
"However, as motivated earlier, samples from such a dialog model often lack detail.",
"To improve such responses, we retrieve and incorporate relevant external knowledge k into the initial response.",
"To achieve our goal, we construct a query using both dialog history H and the initial response x d , and gather a relevant knowledge candidate k from a knowledge source K .",
"The retrieved snippet can provide useful information to the end-user to achieve the conversational goal (see 5.3).",
"We explore both parametric (e.g querying a language model) and non-parametric (e.g. deterministic retrieval using word-overlap) ways to obtain post-hoc knowledge.",
"Pretrained language models (PTLM) are typically trained with a vast amount of text that spans a diverse range of domains.",
"Petroni et al. (2019); Brown et al. (2020) showed that such PTLMs can be used as a source of knowledge when queried with suitable textual prompts (e.g. Seattle is famous for ).",
"To use PTLMs in our use-case, we construct useful prompts from dialog history and the initial response.",
"We assemble simple prompts inspired from various knowledge-seeking situations in dialog (Shwartz et al., 2020) such as [KP] is famous for , Here is what I know about [KP] : , 3141 where [KP] is a key-phrase 2 extracted from dialog context.",
"We use gpt2-large as the PTLM.",
"For example, a query Here is what I know about fun things around Cambridge:\" results in There are plenty of museums to visit around Cambridge. If you love hiking, you can enjoy the trails alongside the river... \" as shown in Figure 1.",
"A complete list of prompts is provided in Appendix B. We finally rank each knowledge snippet k using the likelihood obtained from the PTLM for a concatenated input of k and dialog history and choose the most likely.",
"External knowledge in the form of a text corpus can be used as a non-parametric knowledge source available at decoding time.",
"Compared to parametric knowledge sources, such sources do not generate text as knowledge snippets, but offer the advantage of high quality and reliability of human written text.",
"We consider the dialog history and the initial response as a query to retrieve relevant knowledge instances from the corpus.",
"Next, we identify the top relevant instances in the given corpus with respect to the constructed query using cosine similarity on TF-IDF based representations (Robertson et al., 1995).",
"Effectively utilizing the retrieved knowledge snippets to construct an enriched dialog response encompasses two major challenges.",
"Firstly, it is not practical to use potentially hundreds of knowledge snippets obtained from the retrieval step for a single response generation.",
"Thus, we need to find a relevant but diverse subset of the snippets.",
"Secondly, the dialog model M is trained to condition only on the dialog context, and not on the external knowledge.",
"Hence, to leverage the knowledge snippets, we need a decoding strategy to rewrite the initial response x d such that the resulting final response x f should closely follow the knowledge snippet to be injected without a loss in the fluency and consistency.",
"Thus, our method requires no additional training and only assumes a language model trained on dialog context (i.e. M ).",
"We refer to our proposed framework (Figure",
"2) as POKI ( Po st-hoc K nowledge I njection in Generated Dialog).",
"At each turn, we obtain N knowledge snippets from both the parametric and non-parametric sources.",
"We wish to select a subset of B (out of N ) relevant but diverse knowledge snippets.",
"Thus, a high PMI score would imply a larger semantic similarity between the snippet k i and H .",
"To account for redundancy between the snippet pair k i , k j we again use the PMI score as follows: RED ij,j>i = PMI( k i , k j ) = log p ( k j | k i ) p ( k j ) .",
"The redundancy score is symmetric i.e. RED ij = RED ji as PMI is a symmetric measure.",
"We estimate probabilities (both conditional and marginal) p ( . ) in the above equations using GPT2 language model, following past work (Padmaku-mar and He, 2021).",
"The PMI measure is often considered better than other n-gram-based overlap metrics to measure the degree of association between two sentences (Kedzie et al., 2018; Padmakumar and He, 2021).",
"Semantically similar phrases occur in both sentences that can easily be ignored by overlap based metrics.",
"Selection via Determinantal Point Processes.",
"To select B knowledge snippets out of N with a relevance-redundancy trade-off, we use a subset selection process named Determinantal Point Process (DPP) (Kulesza and Taskar, 2011).",
"DPP employs a non-uniform selection that assigns low probability to subsets (here, of knowledge snippets) that are less diverse by modeling the repulsive correlation between independently occurring datapoints (see Figure 2).",
"We build an N N kernel matrix D , which is real, symmetric and positive semi-definite.",
"The diagonal entries D ii are populated by the squared relevance score of the i -th knowledge REL i and the off-diagonal entries D ij are squared redundancy scores RED ij .",
"We adjust in such a way that D always remains positive semi-definite (more details in (Wilhelm et al., 2018)).",
"To select a subset of B , a DPP assigns a probability of sampling such a subset proportional to the determinant 3142 of the submatrix DB of D , constructed using the indices of the subsetted items.",
"The DPP probability is geometrically related to the volume of the parallelepiped spanned by the selected knowledge snippets.",
"Diverse knowledge snippets tend to be orthogonal in their space hence span larger volume (Kulesza and Taskar, 2012).",
"Choosing B -size submatrix from N -size D is a combinatorial problem and can become prohibitively costly when N is very high.",
"Hence, we use a greedy method (Wilhelm et al., 2018) where we initialize the selection with the most relevant k i and subsequently select the next k j that maximizes the determinant of the resultant submatrix.",
"Upon selecting B knowledge snippets, we want to individually inject each knowledge snippet into x d to construct a candidate final response x f at",
"inference time.",
"Previous works have addressed the problem of unsupervised modification of already-generated text using gradient-based decoding (Dathathri et al., 2020; Qin et al., 2020) that employs an iterative procedure consisting of a forward and a backward pass.",
"The forward pass on the generative model (here, M ) encourages fluency of the generated text while the backward pass performs gradient ascent on certain desired constraints.",
"Note that due to the discrete nature of x d , it is not possible to directly update it via back-propagation.",
"Therefore, we maintain the sequence of hidden representations of each output token as z from the dialog model.",
"Each output token x d ( t ) is realized via p ( x d ( t ) ) softmax( W z ( t ) / ) , where is the temperature hyperparameter, W is the output embedding matrix (shared with the input), and W z ( t ) RV ( V is the size of the vocabulary).",
"Constraints.",
"Following Majumder et al. (2021a), we define a knowledge fidelity objective that encourages x f to be minimally different from the knowledge snippet k .",
"We achieve this by minimizing the cross entropy loss ( CE ) between knowledge tokens k (1) , . . . , k ( T ) as labels and W z (1) , . . . , W z ( T ) as the logits.",
"We further notice that injected knowledge can influence the generation in such a way that it contradicts with responses uttered during previous turns.",
"Hence, we also want x f to be entailed with the dialog history H .",
"We build an entailment classifier ( z, H ) that predicts the probability of x f (ideally, the hidden representation z of x f ) entailing H .",
"The classifier ( z, H ) is a bag-of-words classification layer with hidden states z from M and fine-tuned using the DNLI dataset (Welleck et al., 2019) to predict whether the current response is entailed with previous responses or not.",
"Decoding.",
"In the subsequent forward and backward passes, the hidden representation z is gradually perturbed via gradient ascent on the respective objectives.",
"During backward pass, the objective with constraints is L ( H , k ; z ) = log ( z, H ) CE( k, W z ) with hyperparameters and .",
"We use back-propagation to update z with the gradient z L ( H , k ; z ) while the parameters of M remain fixed.",
"The updated latent representations of z after the backward pass are denoted as z bw .",
"A forward pass with M is required to regularize the hidden states z toward the original dialog model objective to obtain z fw .",
"Corresponding to the t th token, the hidden states for the t + 1 th time step are computed via a weighted addition of backward and forward hidden states, i.e., z ( t +1) = z bw ( t ) + (1 ) z fw ( t ) where (0 , 1) is a hyperparameter.",
"During generation, we start by sampling the initial response x d with greedy decoding from M .",
"The hidden states z (of x d ) are iteratively updated by alternate backward and forward passes.",
"The final response is sampled as x f softmax( W z/ ) .",
"The number of iterations ( = 5 ) and the ( = 0 . 45 ) were chosen by maximizing the Z-normalized sum of dialog model perplexity and linguistic diversity (% of distinct bigrams) in a greedy hyperparameter search.",
"More details are in Appendix B. 3.3 Unsupervised Ranking of Candidate Final Responses Several previous works often over-generate and use an additional ranking step in order to select the final candidate in unsupervised text generation (Qin et al., 2020; Shwartz et al., 2020; Paranjape and Manning, 2021).",
"Similarly, here we want to rank the generated candidate final responses according to the diversity of the generated text as well as the conditional likelihood of generation given the dialog history.",
"For diversity, we measure the percentage of distinct bigrams present in the response.",
"For conditional likelihood, we use 3143 System Acc BLEU BRTSc D-2 ENTR KCopy 70.1 4.1 62.3 3.16 2.41 SimpleTOD (2020) 70.1 15.0 79.2 0.56 0.90 SimpleTOD+ (2021) 69.8 12.1 68.1 0.81 1.11 Arranger (2021) 70.2 12.3 68.5 0.93 1.15 Rewriter (2021) 70.2 12.1 69.4 1.03 1.45 POKI 71.1 13.7 74.5 3.78 2.67 w/o Entailment 69.9 10.9 67.8 3.67 2.56 w/o Kw Fidelity 70.0 12.3 71.2 0.95 1.19 Gold 100 100 100 0.78 0.86 Table 1: Automatic metrics on the test set of MultiWoZ.",
"the pre-trained GPT2 model to obtain the log probability when the dialog history, followed by the generated response, passed as a concatenated input.",
"Since these two scores can have varied scale, we perform Z-normalization on the individual scores and add them to obtain a single score for ranking.",
"The highest ranked candidate response is finally rendered to the user.",
"We experiment with two dialog scenarios: goal-oriented and knowledge grounded.",
"Both setups are knowledge intensive but the training data in such setups often contains only a fraction of the needed knowledge.",
"For the goal-oriented setting, we use the Multi-domain Wizard-of-Oz (Budzianowski et al., 2018) dataset.",
"For knowledge grounded dialog, we use the Wizard-of-Wikipedia (Dinan et al., 2019b) dataset.",
"More details are in Appendix A. Multi-domain Wizard-of-Oz (MultiWOZ) is a multi-domain dialog dataset (we use v2.0 (Hosseini-Asl et al., 2020)) consisting of goal-oriented human-human conversations.",
"The dataset spans seven domains (restaurant, train, attraction, hotel, taxi, hospital, police) and contains 10,438 dialogs with 13.68 average turns.",
"Since, we do not need any training data, we only use an evaluation set (of 7K utterances).",
"Wizard-of-Wikipedia (WoW) is a knowledge grounded dialog dataset which involves retrieving relevant knowledge from Wikipedia, reading and conditioning on it, and finally generating dialog responses (Dinan et al., 2019b).",
"The dataset contains 201K utterances from 22K dialogues spanning 1300 diverse topics, from which we use only the test set.",
"The associated Wikipedia knowledge base has 5.4M articles and 93M sentences.",
"Baselines for MultiWOZ.",
"For MultiWOZ, we consider several baselines following (Sun et al., 2021) for knowledge injection.",
"First, we use the current state-of-the-art model, SimpleTOD, for goal-oriented dialog (Hosseini-Asl et al., 2020).",
"Sun et al. (2021) extends SimpleTOD by adding chitchat candidates to dialog histories during training.",
"They also have other variants that either concatenate output from SimpleTOD and candidate chitchats (Arranger) or rewrite by combining both output and chitchat snippets (Rewriter).",
"We also have a trivial baseline (KCopy) which appends the retrieved knowledge snippet k from POKI with the initial response x d .",
"Baselines for WoW.",
"For WoW, we use two current-best knowledge-grounded models, KGround (Wolf et al., 2019) and BART (Lewis et al., 2020a) that concatenate the associated knowledge snippets (present in WoW) and the dialog history as inputs to generate the response with supervision.",
"KGuide (Zhao et al., 2017) and RAG (Lewis et al., 2020b) have an additional knowledge selection step modeled by a latent variable before response generation similar to knowledge grounded models.",
"We also use the KCopy baseline, as described for MultiWOZ.",
"Variants of POKI.",
"To investigate the impact of various decoding constraints in POKI, we consider the following two variants of POKIw/o Entailment and w/o Knowledge (Kw) Fidelity ( 3.2).",
"In POKI, we use SimpleTOD as the base dialog model in goal-oriented scenarios and use BART (which is a state-of-the-art model for WoW) as the base dialog model in the knowledge-grounded scenario.",
"For all variants of POKI, we use gradient-based inference for decoding the final response.",
"Arguably, a system which can effectively leverage additional knowledge at decoding time should generate more diverse responses.",
"We measure percentage of distinct bigrams as Distinct-(D-2) (Li et al., 2016) and geometric mean of entropy values of empirical frequency distributions of n-grams ( n = 1 , 2 , 3 ) as Entropy (ENTR) (Jhamtani et al., 2018) for diversity.",
"Additionally, we report overlap between generated responses and corresponding ground truth as per BLEU and BERTScore (BRTSc).",
"For MultiWOZ, we also report the final goal accuracy (Acc) following (Hosseini-Asl et al., 2020).",
"MultiWOZ.",
"Table 1 shows POKI outperforms all the baselines in terms of diversity of generated responses.",
"More importantly, we see POKI promotes accuracy of reaching the final dialog state i.e. the goal.",
"For ablated versions of POKI, we find the entailment constraint has little effect on diversity while dropping the knowledge adherence constraint negatively influences accuracy and diversity.",
"All variants of SimpleTOD and all versions of POKI show departure from the results obtained by SimpleTOD on BLEU and BERTScore since all of these versions add external knowledge that were not explicitly present in the data.",
"However, we observe that the departure is not significant and POKI achieves a much closer BERTScore to SimpleTOD compared to baselines.",
"WoW.",
"Despite all systems for WoW use knowledge explicitly in the knowledge-grounded dialog generation task, Table 2 shows POKI generates the most diverse responses.",
"Similar to MultiWOZ, the knowledge adherence constraint still remains a significant factor for increasing diversity, one of the main goals of knowledge injection.",
"For WoW, we instead see POKI outperform even BART (pre-vious SOTA) in terms of BERTScore when injected with external knowledge indicating the need of the external knowledge for modeling WoW dialogs.",
"We conduct a comparative human evaluation with 300 samples to evaluate the quality of generated dialog responses following ACUTE-Eval (Li et al., 2019).",
"We show a generated response from POKI to an annotator with its associated dialog history to annotate if knowledge injection makes the final response more engaging , interesting and humanlike compared to a baseline response.",
"As sanity check, we also investigate if the response remain coherent after knowledge injection.",
"Each sample is evaluated by two annotators 3 .",
"MultiWOZ.",
"Table 3 records the pairwise comparison showing POKI consistently outperforms baselines on all criteria.",
"Responses from POKI are more engaging and interesting compared to SimpleTOD and Rewriter, demonstrating that gradient-based decoding is effective for knowledge injection.",
"In POKI, entailment constraint mostly influences coherence whereas knowledge fidelity constraint is important for engagingness and interestingness.",
"WoW.",
"Table 3 shows POKI outperforms baselines that use grounding knowledge during training in all criteria showing that external knowledge can be useful even in the knowledge-grounded setting to make the conversation engaging and interesting.",
"It also indicates the limitation of the training signal or lack of access to sufficient knowledge and 3 More details of the setup are in Appendix C. 3145 : Center of the town in Cambridge.",
"A large gap in win percentages in favor of POKI for evaluating how humanlike' is a response when compared to state-of-the-art methods suggests knowledge injection leads to more natural conversation.",
"Here too, both decoding constraints show similar trends to MultiWOZ.",
"Qualitative Analysis.",
"Figure 3 shows a conversation by POKI with a user who seeks to find restaurant options around Cambridge.",
"We observe that in most of the turns the injected knowledge appeared as an additional justification over the initial responses making the dialog engaging and effective to reach the user's goal (also noted by human judges in 5.3).",
"For example, in turn 3, we observe that adding the extra information about Indian cuisine helped user to reach a conclusion when their original choice of English cuisine was absent.",
"Effect of Response Length.",
"Qualitatively, as seen in Figure 3, responses generated by POKI are longer than those from the initial response due to the post-hoc knowledge injection.",
"In the human evaluation sample, we found that 37% of responses from POKI are similar or smaller in length compared to responses from the best baseline.",
"We investigate if response length acted as a confounding factor during human evaluation.",
"Among all the cases where POKI was lost over a baseline, 45% ( 2% when bootstrapped with 1000 subsets of size 50) of responses from POKI were longer than those from the comparing baseline.",
"Among win cases for POKI, we observe 49% ( 3% when bootstrapped with 1000 subsets of size 50) POKI responses were longer than those from the comparing method.",
"This indicates that human users did not only choose longer responses as better.",
"Relevant knowledge injection has the benefit of adding more justification to terse dialog outputs and hence influencing the task outcome positively.",
"Mirroring observations from (Ghandeharioun et al., 2019), a real-time full conversation evaluation is needed to investigate if POKI could achieve the conversational goal any better than baselines.",
"We recruited 60 users for this study 4 .",
"One half of the users interacted with POKI, while the other half interacted with the best baseline model that does not augment dialog responses with external knowledge.",
"We construct a speculative goal for each user to accomplish via the conversation.",
"We allow users to end the conversation any time they would like and ask them whether the system helped them to reach their conversation goal along with additional comments to justify their annotation.",
"Users who interacted with a knowledge-augmented system also asked if the system provided any knowledge that user has not explicitly asked for but indeed the extra information helped them to reach the conversational goal (Majumder et al., 2021b).",
"Finally, we also ask if they would like to engage with the system they interacted with in future.",
"For goal-oriented dialog, we construct speculative goals (e.g. looking for entertainment options) manually from the ground truth for 300 dialog samples.",
"Since we are not using the underlying databases, we made sure speculative goals do not require specific information (e.g. booking availability, flight information, etc.).",
"For knowledge-grounded dialog, we provide the intended topic of 4 More details of the participants and the study setup are in Appendix C. 3146 MultiWOZ # turns Goal Know Would use Rewriter 8 2 69% 35% 56% POKI 4 3 86% 84% 76% WoW # turns Goal Know Would use BART 10 2 56% 70% 48% POKI 16 3 76% 89% 71% Table 4: Real-time user study with average # of turns for successful goal completion, % of time the goal was achieved, % of success cases users were helped by an additional knowledge (Know) that was not explicitly asked to reach their goal, and if users would like to use the system in future.",
"discussion (e.g. science fiction) present in the data; the speculative goal here is to know more about, or to have an engaging conversation about the topic.",
"Results.",
"First of all, we find that POKI is unanimously preferred by users compared to the baseline during the user study.",
"More importantly, we see that when the user successfully accomplished their goal, 84% of those times they found the additional knowledge helpful in the goal-oriented setting (MultiWOZ) as compared to a baseline (Rewriter) that did not use any external knowledge.",
"Most importantly, POKI takes significantly fewer turns for users to accomplish the goal as compared to Rewriter implicitly indicating injected knowledge (we observe high correlation, 0.67) contributes toward more efficient conversations.",
"For the knowledge-grounded setting (WoW), both BART and POKI have access to external knowledge sources.",
"However, 89% (compared to 70%) of success scenarios were directly influ-enced by the additional post-hoc knowledge.",
"For knowledge-grounded dialog, a longer conversation is indicative of engagingness on a particular topic (Gopalakrishnan et al., 2019), hence users preferred to converse with POKI for more turns as compared to a BART baseline.",
"We quote a comment from a user who found a conversation about the Korean culture with POKI was particularly engaging Before this conversation, I had less knowledge about Korean movies and art-forms. This gave me a new perspective and a handful of popular opinions to look at it. .",
"knowledge selection step in POKI acts an information bottleneck where the quality of the generated response directly depends on the quality of the",
"We perform a human evaluation on 200 snippets to measure the relevance and the factual correctness in two scenarios: when we randomly select a retrieved snippet or select via DPP.",
"In Table 5, we see that the parametric knowledge source ( gpt2-large ) generates more relevant knowledge snippets than a non-parametric one.",
"We attribute this to",
"1) a large and diverse dataset (webtext) used during pretraining of gpt2 as compared to yelp reviews (restricted domains) we used for retrieval, and",
"2) the limited recall of relevant knowledge when using word-overlap based retrieval.",
"However, large language models are still prone to generate non-factual knowledge.",
"We observe that DPP-based selection in POKI is able to sub-select more factual knowledge which then positively influences the final response quality.",
"For WoW, we also compare the selected snippets with the gold knowledge available in the dataset that in turn show high fidelity in terms of BERTScore.",
"Time Complexity.",
"Madotto et al. (2020) shows that iterative gradient-based decoding could be slower than generating response using single forward pass from an existing model.",
"When we benchmark POKI in an Nvidia 2080Ti GPU, in Table 6, we see that knowledge generation (or retrieval) could be a computational bottleneck for POKI.",
"However the greedy selection and the constrained decoding step do not add significant computational load.",
"Furthermore, POKI's performance is comparable with PPCM (Madotto et al., 2020)a more efficient version of gradient-based decoding.",
"The efficiency of the knowledge retrieval step can be improved with better indexing (Johnson et al., 2021) which we leave as a future work.",
"Knowledge grounded dialog datasets such as Wizard-of-Wikipedia (Dinan et al., 2019a) and Topical chat (Gopalakrishnan et al., 2019) typically consist of dialog responses paired with relevant knowledge available as collected annotations.",
"Hence, models trained on such datasets are restricted to the knowledge sources they were exposed to at training time.",
"Past work (Sun et al., 2021; Majumder et al., 2020a; Su et al., 2020; Komeili et al., 2021; Adolphs et al., 2021; Ghazvininejad et al., 2018; Tuan et al., 2020; Lewis et al., 2020c; Guu et al., 2020) has looked into injecting extra knowledge sources at training time in a bid to add knowledge not available originally as paired to dialog responses.",
"However, such approaches require re-training the model if some new knowledge source were to be used.",
"Moreover, while previous work focuses on just improving specificity of dialog response using external knowledge, we also study the effect of additional knowledge in achieving conversational goals.",
"Improving the diversity of dialog responses by using diversity-promoting sampling has been explored in past work (Fan et al., 2018; Holtzman et al., 2020).",
"We use a gradient-based decoding method, building on past work in this direction (Dathathri et al., 2020; Qin et al., 2020; Madotto et al., 2020; Majumder et al., 2021a).",
"However, we propose new objectives to inject post-hoc knowledge obtained based on already generated dialog an unsupervised knowledge injection method that has not been explored so far.",
"We propose a framework for unsupervised knowledge injection into dialog responses.",
"We show that knowledge can be obtained post-hoc from any knowledge sources that can improve users' ability to reach their conversational goal more effectively.",
"In future, our idea can be generalized to setups where external knowledge can justify model's predictions such as conversational recommendation.",
"We thank anonymous reviewers for providing valuable feedback.",
"BPM is partly supported by a Qual-comm Innovation Fellowship, a Friends of the International Center FellowshipUC San Diego, NSF Award #1750063, and MeetElise."
] | [
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"objective",
"result",
"result",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"other",
"method",
"objective",
"objective",
"objective",
"abstain",
"other",
"other"
] |
[
"A desirable dialog system should be able to continually learn new skills without forgetting old ones, and thereby adapt to new domains or tasks in its life cycle.",
"However, continually training a model often leads to a well-known catastrophic forgetting issue.",
"In this paper, we present Continual Prompt Tuning, a parameter-efficient framework that not only avoids forgetting but also enables knowledge transfer between tasks.",
"To avoid forgetting, we only learn and store a few prompt tokens' embeddings for each task while freezing the backbone pre-trained model.",
"To achieve bi-directional knowledge transfer among tasks, we propose several techniques (continual prompt initialization, query fusion, and memory replay) to transfer knowledge from preceding tasks and a memory-guided technique to transfer knowledge from subsequent tasks.",
"Extensive experiments demonstrate the effectiveness and efficiency of our proposed method on continual learning for dialog state tracking, compared with state-of-the-art baselines.",
"Recently, most studies have focused on developing dialog systems for specific domains in an offline manner, assuming the data distribution stays the same.",
"However, this is far from realistic because a deployed dialog system is often required to support new domains and provide more services constantly over time.",
"Therefore, it is crucial for a dialog system to continually learn new tasks without forgetting old ones with high efficiency.",
"Previous studies on continual learning (Kirk-patrick et al., 2017; Li and Hoiem, 2018) mainly focused on solving the catastrophic forgetting (CF) problem (McCloskey and Cohen, 1989): when a neural model is trained on a sequence of tasks, new tasks may interfere catastrophically with old tasks.",
"Simply storing a model version for each task to * Corresponding author.",
"26C2A1 Figure 1: An illustration of Continual Prompt Tuning .",
"We train a soft prompt for each task and freeze the pre-trained model.",
"Several techniques are proposed to transfer knowledge from preceding tasks (green solid arrows) and subsequent tasks (red dashed arrows).",
"mitigate forgetting is prohibitive as the number of tasks grows, especially when the model size is large.",
"To mitigate catastrophic forgetting with low computation and storage overhead, recent methods freeze the backbone model and propose to train a weight/feature mask (Mallya et al., 2018; Geng et al., 2021) or an adapter (Madotto et al., 2021) for each task independently.",
"However, the techniques above are still not efficient enough, and they largely ignore knowledge transfer among tasks.",
"In this paper, we develop prompt tuning (Lester et al., 2021) for continual learning.",
"We freeze the backbone pre-trained model and train a few prompt tokens' embeddings for each task, which is highly parameter-efficient to avoid forgetting.",
"As illustrated by yellow components in Figure 1, we concatenate the input with a few tunable task-specific prompt tokens before feeding it to a frozen pre-trained model.",
"Since these prompt tokens have only a small number of parameters (0.1% of the pre-trained model's parameters in our experiments), we can efficiently train and store the prompt for each task.",
"During inference, the same pre-trained model can handle different tasks by inputting different prompts, which is friendly for deployment.",
"Unlike the vanilla approach of training each task's prompt from scratch and fixing it afterward, we propose Continual Prompt Tuning , a framework that enables knowledge transfer between tasks.",
"We consider transferring knowledge from both preceding tasks (forward) and subsequent tasks (back-ward).",
"To realize forward transfer, we propose several techniques, including continual prompt initialization, query fusion, and memory replay (green solid arrows in Figure 1).",
"To achieve positive backward transfer, we propose a memory-guided technique that uses subsequent tasks' data to update the previous tasks' prompts selectively (red dashed arrows in Figure 1).",
"We conduct experiments on Dialog State Tracking (DST), a core component of a dialog system, using the Schema-Guided Dialog dataset (Rastogi et al., 2020).",
"The model continually learns new services that have multiple slots to fill.",
"We concatenate all slots' descriptions with the input and insert a sentinel token after each description, formulating DST as a masked spans recovering task, which is similar to the pre-training objective of T5 (Raffel et al., 2020).",
"We empirically show that our proposed framework effectively outperforms state-of-the-art baselines on continual learning for DST, and is extremely efficient in terms of computation and storage.",
"1 To summarize, our main contributions are: 1. For the first time, we develop prompt tuning for continual learning, which avoids forgetting efficiently and is friendly for deployment.",
"2. We investigate several techniques for forward and backward knowledge transfer based on prompt tuning, further boosting the continual learning performance.",
"3. Our experiments on continual DST demonstrate the superior performance and efficiency of our proposed method.",
"Continual Learning (CL) studies the problem of continually acquiring knowledge from a data stream and reusing it for future learning while avoiding forgetting.",
"Three kinds of CL methods have been developed.",
"Rehearsal methods store and replay some training samples from previous tasks (Rebuffi et al., 2017; Lopez-Paz and Ranzato, 2017).",
"Regularization methods apply additional loss to aid knowledge consolidation (Kirkpatrick 1 Code and data are publicly available at https:// github.com/thu-coai/CPT4DST et al., 2017; Li and Hoiem, 2018).",
"Architectural methods introduce task-specific parameters for new tasks and fix parameters for old tasks to prevent forgetting, to which our method belongs.",
"Previous architectural methods include dynamic expanding network structure (Rusu et al., 2016), iterative network pruning and re-training (Mallya and Lazeb-nik, 2018), learning a parameter mask for each task individually (Mallya et al., 2018), etc.",
"For continual learning in dialog system, variants of general CL methods have been applied (Lee, 2017; Shen et al., 2019; Wu et al., 2019; Mi et al., 2020; Geng et al., 2021).",
"AdapterCL (Madotto et al., 2021) is the most related to our work, which freezes the pre-trained model and learns an adapter (Houlsby et al., 2019) for each task independently.",
"Compared with AdapterCL, our method is more parameter-efficient, and we explore the effect of both forward and backward transfer.",
"Recent studies have found that using a textual prompt to convert downstream tasks to the language modeling task is a more effective way to use pre-trained language models than typical fine-tuning (Brown et al., 2020; Schick and Schtze, 2021).",
"Prompts can be manual designed (Petroni et al., 2019) or generated automatically (Shin et al., 2020; Jiang et al., 2020; Gao et al., 2021).",
"Since searching prompts in discrete spaces is sub-optimal, some works (Qin and Eisner, 2021; Liu et al., 2021; Han et al., 2021) combine hard text prompts and soft prompts whose embeddings are learned through back-propagation.",
"Lester et al. (2021) show that freezing the pre-trained model and only tuning soft prompts, known as prompt tuning, is parameter-efficient and becomes more competitive with fine-tuning as the model size grows.",
"Prompt tuning differs from embedding adapter (Zhu et al., 2021) that aims to address the multilingual embedding deficiency.",
"An embedding adapter transforms all tokens embeddings but do not affect transformer layers' computation, while prompt tuning does not change tokens embeddings but adds new tunable prompt tokens to the input, serving as context and affecting all following transformer layers.",
"Gu et al. (2021) and Vu et al. (2021) further explore the transferability of soft prompts across tasks.",
"While they investigate one-step adaptation, we are interested in prompt transfer in the continual learning setting.",
"Dialog State Tracking (DST) aims to capture user goals in the form of (slot, value) pairs.",
"Traditional ontology-based classification methods (Mrkic et al., 2017; Lee et al., 2019) require access to all candidate values.",
"To alleviate the reliance on the ontology and improve generalization to unseen values, some work extract values from a dialog context (Xu and Hu, 2018; Gao et al., 2019) while others generate values directly to handle situations where values are missing from the context (Wu et al., 2019; Hosseini-Asl et al., 2020).",
"Generation-based models either generate all (slot, value) pairs in one pass (Hosseini-Asl et al., 2020; Madotto et al., 2021) or generate value for each given slot separately (Wu et al., 2019).",
"The former are more efficient but can only predict in-domain slots and lack transferability while the latter can incorporate more information about a slot as a query, such as a brief natural language description (Rastogi et al., 2020), slot type information (Lin et al., 2021), possible values (Lee et al., 2021), and the task definition and constraint (Mi et al., 2022).",
"Our proposed method integrates multiple slot descriptions into a single query and generates all values in one pass, which improves performance without losing efficiency.",
"The goal of continual learning is to sequentially learn a model f : X T Y from a stream of tasks T 1 ...",
"TT that can predict the target y given the input x and task T k T .",
"We denote the data for each task T k as D k .",
"Our method is based on pre-trained language models.",
"Instead of fine-tuning a pre-trained model in a traditional manner (Figure",
"2(a)), we freeze the model but \"reprogram\" it to solve task T k by adding m new soft prompt tokens P k = P 1 k P 2 k ...P mk to the textual input and tuning the embeddings of P k only.",
"Since the prompt's parameters are much less than the model's, we save P k for each task to avoid forgetting.",
"We treat each service/API as a task in continual DST (service and task are used interchangeably).",
"To incorporate informative slot descriptions and ease the decoding process, we convert the descriptions into a query with masked spans and formulate DST as a masked spans recovering task (Sec. 3.2).",
"To enhance knowledge transfer between tasks, we propose continual prompt initialization, query fusion, and memory replay for forward transfer (Sec. 3.3) and explore a memory-guided technique for backward transfer (Sec. 3.4).",
"In DST, each service T k has a set of pre-defined slots S k = { s 1 , ..., s n k } to be tracked.",
"The input x is a dialog and the output y consists of slot-value pairs: { ( s 1 , v 1 ) , ( s 2 , v 2 ) , ..., ( s n k , v n k ) } .",
"Similar to many NLP tasks, DST can be formulated as a text-to-text generation task.",
"Formally, we define a function g k : X Y V V for each service T k to transform the original data ( x, y ) to: x, y = g k ( x, y ) (1) where V is the vocabulary and x, y are texts that serve as the model input and output, respectively.",
"For example, x can be the concatenation of x and service name, while y is a sequence of slot-value pairs (Madotto et al., 2021) (Figure",
"2(a)).",
"Previous research has shown that incorporating a natural language description d i for each slot s i is beneficial (Lin et al., 2021; Lee et al., 2021).",
"They concatenate the dialog x with each slot description d i and decode the value v i independently.",
"However, separately decoding is inefficient, especially when there are many slots.",
"To solve this, we concatenate all slot descriptions and insert a sentinel token after each description to form a query added to the input, formulating DST as a masked spans recovering task that generates all slot values in one pass: x = [ x ; Q k ; P k ] Q k = d k 1 : (cid:104) M 1 (cid:105) . ... d kn k : (cid:104) M n k (cid:105) . y = (cid:104) M 1 (cid:105) v k 1 ... (cid:104) M n k (cid:105) v kn k",
"(2)",
"where [ ; ] is the concatenation operation and",
"(cid:104)",
"M",
"(cid:105)",
"are distinct sentinel tokens representing masked spans.",
"The query Q k contains all n k slot descriptions for task T k with n k masked spans and y contains corresponding slot values leaded by the sentinel tokens.",
"If the value of a slot can not be inferred from the input, we set it to \"None\".",
"We freeze the pre-trained model's parameters and only optimize the prompt's parameters P k for each service T k .",
"The loss function is: L Pk",
"(a) Fine-tuning takes the dialog and current service's name as input and tunes T5 to generate slot-value pairs.",
"(b) Continual Prompt Tuning feeds the dialog, query consisting of slot descriptions and sentinel tokens, and prompt tokens to frozen T5 and tunes the prompt's embeddings to generate values for all slots in the query.",
"Continual prompt initialization, query fusion, and memory replay are proposed to enhance forward transfer while subsequent services' data will be used for backward transfer.",
"We show an example dialog, service name, fused query, and expected outputs.",
"Slot names and descriptions are in italic and values are underlined.",
"Note that the second slot description in the query belongs to another service (\"banks\") and is inserted by query fusion.",
"Prompt Pre-trained Model",
"However, when training on the current task, there is only one query that consists of the slot descriptions of that task in a fixed order, which may hinder the model from learning the general skill.",
"Therefore, we propose to augment the query by mixing slot descriptions from the current and previous tasks to help the prompt better understand the correspondence between slot descriptions and values.",
"We fuse the query Q k with previous tasks' queries { Q j } j<k for each sample, including three steps: 1) sample n 1 slots from S k randomly, where n 1 is sampled from [1 , |S k | ] uniformly.",
"2) sample n 2 slots from previous tasks' slots (cid:83) i<k S i randomly, where n 2 is sampled from [1 , n 1 ] uniformly.",
"3) combine the above n 1 and n 2 slots' descriptions in a random order as new Q (cid:48) k , and modify y accordingly.",
"Note that some original slots are dropped, and values for added slots are set to \"None\".",
"Input Task 1 transform",
"3.3 Forward Transfer Reusing the knowledge acquired from preceding tasks often improves and accelerates the learning on future tasks.",
"Therefore, we propose three types of techniques for forward transfer that can be employed in combination.",
"3.3.1 Continual Prompt Initialization An intuitive way to transfer knowledge is parameter initialization.",
"We explore two continual prompt initialization strategies.",
"CLInit uses last task's prompt P k 1 to initialize current task's prompt P k .",
"SelectInit evaluates all { P j } j<k on the validation set of T k without training and selects the one with the lowest loss to initialize P k .",
"The initial prompt of CLInit has been continually trained on all previous tasks, while SelectInit only considers the most relevant task without interference from its subsequent tasks.",
"We empirically compare these two strategies in Sec. 5.3.",
"We hope the model can learn to generate values according to any slot descriptions, which is a general skill that may improve performance on future tasks.",
"Previous studies (Rebuffi et al., 2017; Lopez-Paz and Ranzato, 2017) store a few samples for each task and replay them when training on new tasks to mitigate forgetting.",
"Since our prompt tuning framework has already resolved forgetting, we focus on how these samples benefit the current task.",
"We assume we can store | M | samples for each task ( | M | should be small) and denote M i as the memory for task T i .",
"When a new task T k comes, we optimize P k on D k and M <k = (cid:83) i<k M i jointly, changing the loss function to L Pk ( D k + M <k ) .",
"When combined with query fusion, query Q i for samples in the memory M i are also fused with queries { Q j } j k,j (cid:54) = i from other seen tasks, including the current task.",
"Note that in this way, samples from other tasks can be viewed as \"positive\" samples to those added slots in Q (cid:48) i since these samples may have not \"None\" values for those added slots.",
"Although fixing P k immediately after training on task T k can avoid forgetting, it also blocks the backward knowledge transfer from future tasks.",
"Motivated by Chaudhry et al. (2019), we explore whether it is possible to improve the performance on previous tasks with the help of memory when a new task comes.",
"Specifically, for each previous task T i , i < k , we initialize a new prompt P ( k ) i to P i and trained it on current task's data D k with memory M i as regularization.",
"During training, we sample a batch from D k and a batch from M i synchronously and denote the gradient from each batch as g ori and g ref , respectively.",
"We decide the gradient for update according to the angle between g ori and g ref : g = (cid:40) g ori , if g Tori g ref > 0 0 , otherwise (4) which means we abort the update that will increase the loss on memory batch.",
"We empirically find that this simple abortion is better than projecting g ori onto the normal plane of g ref (Chaudhry et al., 2019).",
"After training, we update P i to P ( k ) i if P ( k ) i obtains lower loss and better (or equal) performance on M i than P i .",
"Recently, Madotto et al. (2021) proposed a continual learning benchmark for task-oriented dialog systems and compared several classic CL methods.",
"We adapt their data processing steps and baselines in our experiments.",
"We conduct experiments on Schema-Guided Dialog dataset (SGD) (Rastogi et al., 2020) that has 44 services over 19 domains.",
"It also provides a one-sentence description for each slot.",
"We treat each service as a task and only consider dialogs involving a single service.",
"We randomly split a service's dialogs into train/val/test sets at the ratio of 7:1:2.",
"The number of training samples of each service ranges from 112 to 4.7K, and there are 2 to 10 slots for one service.",
"More details about data statistics can be found in the Appendix (Table 8).",
"We evaluate DST performance using the widely adopted Joint Goal Accuracy (JGA) (Wu et al., 2019), which requires all slots' values are correctly predicted.",
"We assign the target service during testing to avoid ambiguity since the same dialog can be parsed differently under different services.",
"We denote a j,i as the JGA on the test set of task T i right after training on task T j .",
"We evaluate the CL performance as the average JGA on all tasks after training on the final task TT : Avg .",
"(6)",
"FWT is the averaged zero-shot performance on new tasks, evaluating a model's generalization ability.",
"BWT assesses the impact that learning on subsequent tasks has on a previous task.",
"Negative BWT indicates that the model has forgotten some previously acquired knowledge.",
"We adopt the following models from Madotto et al.",
"(2021) as baselines: Fine-tuning : Fine-tune the model on new task data continually.",
"Replay : Save | M | samples randomly sampled from the training set of each task T i to memory M i and jointly train the model on new task data D k and memory M <k .",
"EWC : Maintain the memory in the same way as Replay but use it to compute the Fisher information matrix for regularization (Kirkpatrick et al., 2017).",
"AdapterCL : Freeze the pre-trained model and train a residual Adapter (Houlsby et al., 2019) for each task independently (Madotto et al., 2021).",
"Above methods use the same input and output format as in Figure",
"Prompt tuning based methods including our proposed Continual Prompt Tuning are list below: Prompt Tuning : Formulate DST as a masked spans recovering task (Sec. 3.2) and only tune the prompt for each task independently.",
"Multi-task Prompt Tuning : Prompt Tuning in a multi-task manner instead of CL.",
"Train a single prompt using all tasks' data concurrently.",
"Continual Prompt Tuning : Prompt Tuning with CLInit (Sec. 3.3.1) and query fusion (Sec. 3.3.2).",
"w/ memory with memory replay (Sec. 3.3.3).",
"w/ memory & backward with memory replay and memory-guided backward transfer (Sec. 3.4).",
"We use the following setting in the experiments unless otherwise specified.",
"Training task sequences Since a sequence of all (44) tasks is too long for the evaluation purpose, we conduct most of the experiments on 15 tasks chosen at random to save computing resources.",
"We run AdapterCL , Prompt Tuning , and Multi-task Prompt Tuning 5 times with different random seeds because they are agnostic to task order.",
"The FWT and BWT metrics for these models are left blank.",
"We run other methods in the same 5 task orders created by random permutation.",
"The selected tasks and ordering are listed in the Appendix (Table 9).",
"Hyper-parameters We use T5-small as the backbone model and reuse its sentinel tokens (Raffel et al., 2020).",
"For each task, Continual Prompt Tuning first trains 10 epochs with fused query (and using memory if available) for forward transfer.",
"Afterward, it concentrates on the current task and continues training 10 epochs on the original data of the current task.",
"When using backward transfer, we train 5 epochs for each previous task.",
"Other methods train 20 epochs for each task.",
"We use AdamW and set the learning rate to 3e-5 for Fine-tuning , Replay , and EWC , 3e-3 for AdapterCL , and 0.5 for all prompt tuning based methods.",
"We set the batch size to 16 for prompt tuning based methods and 8 for other methods.",
"To avoid overfitting, we perform early stopping if validation performance does not improve for 5 consecutive epochs.",
"The weight for EWC regularization loss is 0.01.",
"We set the memory size | M | to 50 for each task and save the same samples for all methods that require memory.",
"We initialize prompt tokens with the tokens randomly drawn from the vocabulary.",
"For prompt tuning based methods, we tune 100 soft prompt tokens with the embedding size 512 for each task, resulting in 51.2K parameters.",
"To compare parameter efficiency, we adjust AdapterCL 's parameters for each task to be nearly 1x or 20x as ours.",
"The experiments are organized as follows.",
"We compare our method with baselines in Sec. 5.1, and present a comprehensive ablation study in Sec. 5.2.",
"We investigate the effect of prompt initialization in Sec. 5.3, and the effect of model size and prompt length in Sec. 5.4.",
"Computation Resource Analysis.",
"In CL, there is a trade-off between performance and computation resources.",
"Ideally, we hope to utilize the least amount of computation resources to achieve the best performance.",
"We take three vital resources into our consideration.",
"Memory saves previous tasks' samples, which may involve privacy issue and requires extra storage.",
"Additional parameters are the extra parameters we add to our model to cope with different tasks along the CL process, which should be kept to a minimum in order to scale to long task sequences.",
"Tunable parameters are the trainable parameters when we learn a task, which is important for GPU memory and computation.",
"We show the usage of these resources in Table 1 (right).",
"Replay stores | M | samples for each task and does not need extra parameters.",
"EWC saves the Fisher information matrix and original parameters, requiring two times additional parameters.",
"AdapterCL , Prompt Tuning , and Continual Prompt Tuning require no memory and only add a small number (2% or 0.1%) of additional parameters for each task, largely reducing the computational and storage overhead.",
"Apart from the vanilla form, Continual Prompt Tuning can also utilize the memory if available.",
"Consistent with Madotto et al. (2021), both Fine-tuning and EWC suffer from catastrophic forgetting while replaying memory can alleviate the problem to a large extend.",
"Fine-tuning and EWC have a low Avg.",
"JGA because of the large negative BWT, while Replay improves BWT a lot thus has a high Avg.",
"JGA.",
"Our proposed Prompt Tuning with masked spans recovering is more parameter efficient than AdapterCL .",
"In terms of Avg.",
"JGA, Prompt Tuning is much better than AdapterCL with the same size and comparable to AdapterCL with 20x parameters.",
"Forward transfer through CLInit and query fusion is effective for Prompt Tuning .",
"Continual Prompt Tuning improves over Prompt Tuning significantly and outperforms baselines.",
"When memory is available, our method achieves the best results",
"w.r.t. all metrics, closing the gap between CL and multi-task learning.",
"Memory improves zero-shot performance (FWT) on new tasks as Replay is better than Fine-tuning and Continual Prompt Tuning w/ memory is better than without memory.",
"Our memory-guided backward transfer effectively utilizes subsequent tasks to help previous tasks.",
"Although minor, Continual Prompt Tuning w/ memory & backward is the only method that exhibits positive BWT.",
"To understand the effect of different proposed techniques, we conduct an in-depth ablation study and show the result in Table 2. Row 1 and 2 do not formulate DST as a masked spans recovering (MSR) task: the input is the concatenate of the dialog, service name, and soft prompt, while the output is a sequence of slot-value pairs as in Fine-tuning (Fig-ure",
"2(a)).",
"Several interesting observations can be noted: First , formulating DST as MSR is benefi-MSR CLInit QF MR Avg.",
"cial.",
"Using MSR achieves better CL performance regardless of learning each task independently (row 3",
"v.s.",
"row 1) or continually using CLInit (row 4",
"v.s.",
"row 2).",
"Besides, MSR formulation improves zero-shot generalization on new tasks (row 4",
"v.s.",
"row 2).",
"Second , forward transfer through CLInit brings large improvement for CL.",
"CLInit outperforms random initialization greatly for both using MSR formulation (row 4",
"v.s.",
"3) and not (row 2",
"v.s.",
"1).",
"Third , both query fusion and memory replay are effective.",
"When they are used separately, memory replay (row 6) boosts the performance more than query fusion (row 5), while applying them altogether achieves the best performance (row 7).",
"In this experiment (Table 3), we compare CLInit with other prompt initialization strategies for Prompt Tuning in CL.",
"SelectInit (see Sec. 3.3.1) Initialization Avg.",
"selects the prompt that has the best zero-shot performance on the current task from all previous tasks' prompts for initialization.",
"We could see that both SelectInit and CLInit outperform random initialization significantly, demonstrating the effectiveness of transferring knowledge from previous tasks through prompt initialization.",
"CLInit is slightly better than SelectInit in both Avg.",
"JGA and zero-shot generalization (FWT), which reveals the benefit of accumulating knowledge from all seen tasks.",
"In contrast, the prompt initialized by SelectInit has seen fewer tasks and thus contains less knowledge, which might explain the slightly worse result.",
"Based on the observation above, we further study that whether seeing more preceding tasks further helps CLInit.",
"To this end, we choose a task order of all 44 tasks at random (see Table 8 in the Appendix) and perform Prompt Tuning with CLInit on the last 5, last 15, last 30, and all 44 tasks separately.",
"Formally, we train on four CL curriculums T 40:44 , T 30:44 , T 15:44 , and T 1:44 , which have the same ending.",
"We calculate the Avg.",
"JGA on the T 40:44 , T 30:44 , and T 15:44 if possible.",
"As illustrated in Table 4, performance on the same tasks (in the same column) increases monotonously as the number of preceding tasks grows.",
"This pattern validates that the benefit of CLInit becomes more evident as the number of tasks increases.",
"This finding suggests that our method is suitable for long task sequences.",
"In this experiment, we analyze the influence of pre-trained model size and prompt length.",
"We vary the pre-trained model in {T5-small, T5-base, T5-large} and prompt length in {1, 5, 20, 100, 150} for Continual Prompt Tuning on the 15 tasks (the task order is in Table 9 in the Appendix).",
"Figure 3 shows Avg.",
"JGA and Table 5 shows FWT.",
"We can observe that: First , when fixing the prompt length, increasing the model size improves the Avg.",
"JGA as well as the generalization ability measured by FWT in most cases.",
"Second , when the backbone model size is fixed, increasing the prompt length improves the overall performance in general.",
"Furthermore, we found that increasing prompt token length from 20 to 100 improves Avg.",
"JGA and FWT more than increasing it from 100 to 150, which is consistent with the finding in Lester et al. (2021).",
"Third , our method becomes more parameter-efficient as the backbone model size grows.",
"With the same number of tunable parameters (x-axis), using a larger pre-trained model achieves better Avg.",
"JGA.",
"In this section, we compare the role of memory in Replay and our method.",
"We vary the memory size per task | M | in {10, 50, 100} and show the performance of Replay and Continual Prompt Tuning with memory replay (and memory-guided backward transfer) in Table 6.",
"We can find that increasing the memory size benefits Replay significantly.",
"This is not surprising because Replay and other rehearsal methods rely on memory to solve the challenging forgetting problem.",
"When the memory size is unlimited, Replay degenerates to multi-task learning, which is powerful but costly in storage and computation.",
"For Continual Prompt Tuning , however, the memory is not used for retaining the performance on previous tasks since parameters for previous tasks are saved.",
"In forward transfer, the memory helps recall previous tasks' knowledge and serves as a complement to CLInit and query fusion.",
"The influence on Avg.",
"JGA depends on the effect of transfer learning on the current task via multi-task training ( L Pk ( D k + M <k ) ).",
"As shown in the row 2 in Table 6, increasing the memory size does not improve Avg.",
"JGA significantly and may even distract the model from learning the current domain.",
"This result suggests that our method does not need a large memory for forward transfer.",
"In backward transfer, the memory gives reference gradients to guide the updates and serves as a filter to decide whether to accept the updates.",
"Thus larger memory gives more accurate guidance.",
"From the bottom row in Table 6, we can find that increasing memory size can improve the effect of backward transfer.",
"We also conduct experiments using a percentage memory budget, setting the memory size for each task proportional to task data size: | M i | | D i | .",
"This means low-resource tasks have fewer samples Memory Size fixed = 50 proportional Replay 58.6 3 .",
"stored in the memory than in the original setting.",
"We set the total memory size to 50 * T, where T is the number of tasks.",
"As shown in Table 7, Replay performs much worse (58.6 55.8) in the unbalanced task memory setting while the effect on Continual Prompt Tuning w/ mem.",
"is slight (60.7 60.3).",
"Besides, our proposed backward transfer technique is still effective.",
"Overall, these results indicate that compared with Replay , our method uses the memory differently and benefits less from enlarging the memory.",
"In this paper, we develop prompt tuning for continual learning for the first time.",
"We propose Continual Prompt Tuning , a highly parameter-efficient framework that avoids forgetting and enables for-ward/backward knowledge transfer among tasks.",
"For forward transfer, we explore continual prompt initialization, query fusion, and memory replay techniques.",
"For backward transfer, we devise a memory-guided technique.",
"Extensive experiments on continual learning for DST demonstrate the effectiveness and efficiency of our proposed method compared with state-of-the-art baselines.",
"Our method and findings will foster more future studies towards building more scalable, adaptable task-oriented dialog systems.",
"This work was supported by the National Science Foundation for Distinguished Young Scholars (with No. 62125604) and the NSFC projects (Key project with No. 61936010 and regular project with No. 61876096).",
"This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2019GQG1 and 2020GQG0005, and sponsored by Tsinghua-Toyota Joint Research Fund."
] | [
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"objective",
"method",
"objective",
"objective",
"method",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"objective",
"abstain",
"objective",
"method",
"other",
"other"
] |
[
"Graphs that capture relations between textual units have great benefits for detecting salient information from multiple documents and generating overall coherent summaries.",
"In this paper, we develop a neural abstractive multidocument summarization (MDS) model which can leverage well-known graph representations of documents such as similarity graph and discourse graph, to more effectively process multiple input documents and produce abstractive summaries.",
"Our model utilizes graphs to encode documents in order to capture cross-document relations, which is crucial to summarizing long documents.",
"Our model can also take advantage of graphs to guide the summary generation process, which is beneficial for generating coherent and concise summaries.",
"Furthermore, pre-trained language models can be easily combined with our model, which further improve the summarization performance significantly.",
"Empirical results on the WikiSum and MultiNews dataset show that the proposed architecture brings substantial improvements over several strong baselines.",
"Multi-document summarization (MDS) brings great challenges to the widely used sequence-to-sequence (Seq2Seq) neural architecture as it requires effective representation of multiple input documents and content organization of long summaries.",
"For MDS, different documents may contain the same content, include additional information, and present complementary or contradictory information (Radev, 2000).",
"So different from single document summarization (SDS), cross-document links are very important in extracting salient information, detecting redundancy and generating overall coherent summaries for MDS.",
"Graphs that capture Corresponding author.",
"relations between textual units have great benefits to MDS, which can help generate more informative, concise and coherent summaries from multiple documents.",
"Moreover, graphs can be easily constructed by representing text spans (e.g. sentences, paragraphs etc.) as graph nodes and the semantic links between them as edges.",
"Graph representations of documents such as similarity graph based on lexical similarities (Erkan and Radev, 2004) and discourse graph based on discourse relations (Christensen et al., 2013), have been widely used in traditional graph-based extractive MDS models.",
"However, they are not well studied by most abstractive approaches, especially the end-to-end neural approaches.",
"Few work has studied the effectiveness of explicit graph representations on neural abstractive MDS.",
"In this paper, we develop a neural abstractive MDS model which can leverage explicit graph representations of documents to more effectively process multiple input documents and distill abstractive summaries.",
"Our model augments the end-to-end neural architecture with the ability to incorporate well-established graphs into both the document representation and summary generation processes.",
"Specifically, a graph-informed attention mechanism is developed to incorporate graphs into the document encoding process, which enables our model to capture richer cross-document relations.",
"Furthermore, graphs are utilized to guide the summary generation process via a hierarchical graph attention mechanism, which takes advantage of the explicit graph structure to help organize the summary content.",
"Benefiting from the graph modeling, our model can extract salient information from long documents and generate coherent summaries more effectively.",
"We experiment with three types of graph representations, including similarity graph, topic graph and discourse graph, which all significantly improve the MDS performance.",
"Additionally, our model is complementary to most pre-trained language models (LMs), like BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019b).",
"They can be easily combined with our model to process much longer inputs.",
"The combined model adopts the advantages of both our graph model and pre-trained LMs.",
"Our experimental results show that our graph model significantly improves the performance of pre-trained LMs on MDS.",
"The contributions of our paper are as follows: Our work demonstrates the effectiveness of graph modeling in neural abstractive MDS.",
"We show that explicit graph representations are beneficial for both document representation and summary generation.",
"We propose an effective method to incorporate explicit graph representations into the neural architecture, and an effective method to combine pre-trained LMs with our graph model to process long inputs more effectively.",
"Our model brings substantial improvements over several strong baselines on both WikiSum and MultiNews dataset.",
"We also report extensive analysis results, demonstrating that graph modeling enables our model process longer inputs with better performance, and graphs with richer relations are more beneficial for MDS.",
"1 2 Related Work 2.1 Graph-based MDS Most previous MDS approaches are extractive, which extract salient textual units from documents based on graph-based representations of sentences.",
"Various ranking methods have been developed to rank textual units based on graphs to select most salient ones for inclusion in the final summary.",
"Erkan and Radev (2004) propose LexRank to compute sentence importance based on a lexical similarity graph of sentences.",
"Mihalcea and Tarau (2004) propose a graph-based ranking model to extract salient sentences from documents.",
"Wan (2008) further proposes to incorporate document-level information and sentence-to-document relations into the graph-based ranking process.",
"A series of variants of the PageRank algorithm has been 1 Codes and results are in: https://github.com/ PaddlePaddle/Research/tree/master/NLP/ACL2020-GraphSum further developed to compute the salience of textual units recursively based on various graph representations of documents (Wan and Xiao, 2009; Cai and Li, 2012).",
"More recently, Yasunaga et al. (2017) propose a neural graph-based model for extractive MDS.",
"An approximate discourse graph is constructed based on discourse markers and entity links.",
"The salience of sentences is estimated using features from graph convolutional networks (Kipf and Welling, 2016).",
"Yin et al. (2019) also propose a graph-based neural sentence ordering model, which utilizes entity linking graph to capture the global dependencies between sentences.",
"Abstractive MDS approaches have met with limited success.",
"Traditional approaches mainly include: sentence fusion-based (Banerjee et al., 2015; Filippova and Strube, 2008; Barzilay and McKe-own, 2005; Barzilay, 2003), information extraction-based (Li, 2015; Pighin et al., 2014; Wang and Cardie, 2013; Genest and Lapalme, 2011; Li and Zhuge, 2019) and paraphrasing-based (Bing et al., 2015; Berg-Kirkpatrick et al., 2011; Cohn and Lapata, 2009).",
"More recently, some researches parse the source text into AMR representation and then generate summary based on it (Liao et al., 2018).",
"Although neural abstractive models have achieved promising results on SDS (See et al., 2017; Paulus et al., 2018; Gehrmann et al., 2018; Celikyilmaz et al., 2018; Li et al., 2018a,b; Narayan et al., 2018; Yang et al., 2019a; Sharma et al., 2019; Perez-Beltrachini et al., 2019), it's not straightforward to extend them to MDS.",
"Due to the lack of sufficient training data, earlier approaches try to simply transfer SDS model to MDS task (Lebanoff et al., 2018; Zhang et al., 2018; Baumel et al., 2018) or utilize unsupervised models relying on recon-struction objectives (Ma et al., 2016; Chu and Liu, 2019).",
"Later, Liu et al. (2018) propose to construct a large scale MDS dataset (namely WikiSum) based on Wikipedia, and develop a Seq2Seq model by considering the multiple input documents as a concatenated flat sequence.",
"Fan et al. (2019) further propose to construct a local knowledge graph from documents and then linearize the graph into a sequence to better sale Seq2Seq models to multidocument inputs.",
"Fabbri et al. (2019) also introduce a middle-scale (about 50K) MDS news dataset (namely MultiNews), and propose an end-to-end model by incorporating traditional MMR-based Hierarchical Graph Attention Add & Normalize Feed Forward Add & Normalize Feed Forward POSITIONAL ENCODING (cid:335) Masked Self-Attention Add & Normalize Graph Decoding Layer token1 END Graph Encoding Layer first paragraph last paragraph Graph-informed Self-Attention Add & Normalize Feed Forward Add & Normalize Feed Forward PARAGRAPH POSITION ENCODINGTOKENPOSITIONENCODING (cid:335) (cid:335) Transformer Transformer (cid:335) (cid:335) Figure 1: Illustration of our model, which follows the encoder-deocder architecture.",
"extractive model with a standard Seq2Seq model.",
"The above Seq2Seq models haven't study the importance of cross-document relations and graph representations in MDS.",
"Most recently, Liu and Lapata (2019a) propose a hierarchical transformer model to utilize the hierarchical structure of documents.",
"They propose to learn cross-document relations based on self-attention mechanism.",
"They also propose to incorporate explicit graph representations into the model by simply replacing the attention weights with a graph matrix, however, it doesn't achieve obvious improvement according to their experiments.",
"Our work is partly inspired by this work, but our approach is quite different from theirs.",
"In contrast to their approach, we incorporate explicit graph representations into the encoding process via a graph-informed attention mechanism.",
"Under the guidance of explicit relations in graphs, our model can learn better and richer cross-document relations, thus achieves significantly better performance.We also leverage the graph structure to guide the summary decoding process, which is beneficial for long summary generation.",
"Additionally, we combine the advantages of pretrained LMs into our model.",
"Pretrained LMs (Peters et al., 2018; Radford et al.; Devlin et al., 2019; Dong et al., 2019; Sun et al., 2019) have recently emerged as a key technology for achieving impressive improvements in a wide variety of natural language tasks, including both language understanding and language generation (Edunov et al., 2019; Rothe et al., 2019).",
"Liu and Lapata (2019b) attempt to incorporate pre-trained BERT encoder into SDS model and achieves significant improvements.",
"Dong et al. (2019) further propose a unified LM for both language understanding and language generation tasks, which achieves state-of-the-art results on several generation tasks including SDS.",
"In this work, we propose an effective method to combine pretrained LMs with our graph model and make them be able to process much longer inputs effectively.",
"In order to process long source documents more effectively, we follow Liu and Lapata (2019a) in splitting source documents into multiple paragraphs by line-breaks.",
"Then the graph representation of documents is constructed over paragraphs.",
"For example, a similarity graph can be built based on cosine similarities between tf-idf representations of paragraphs.",
"Let G denotes a graph representation matrix of the input documents, where G [ i ][ j ] indicates the relation weights between paragraph P i and P j .",
"Formally, the task is to generate the summary S of the document collection given L input paragraphs P 1 , . . . , PL and their graph representation G .",
"Our model is illustrated in Figure 1, which follows the encoder-decoder architecture (Bahdanau et al., 2015).",
"The encoder is composed of several token-level transformer encoding layers and paragraph-level graph encoding layers which can be stacked freely.",
"The transformer encoding layer follows the Transformer architecture introduced in Vaswani et al. (2017), encoding contextual information for tokens within each paragraph.",
"The graph encoding layer extends the Transformer architecture with a graph attention mechanism to incorporate explicit graph representations into the encoding process.",
"Similarly, the decoder is composed of a stack of graph decoding layers.",
"They extend the Transformer with a hierarchical graph attention mechanism to utilize explicit graph structure to guide the summary decoding process.",
"In the following, we will focus on the graph encoding layer and graph decoding layer of our model.",
"As shown in Figure 1, based on the output of the token-level transformer encoding layers, the graph encoding layer is used to encode all documents globally.",
"Most existing neural work only utilizes attention mechanism to learn latent graph representations of documents where the graph edges are attention weights (Liu and Lapata, 2019a; Nicu-lae et al., 2018; Fernandes et al., 2018).",
"However, much work in traditional MDS has shown that explicit graph representations are very beneficial to MDS.",
"Different types of graphs capture different kinds of semantic relations (e.g. lexical relations or discourse relations), which can help the model focus on different facets of the summarization task.",
"In this work, we propose to incorporate explicit graph representations into the neural encoding process via a graph-informed attention mechanism.",
"It takes advantage of the explicit relations in graphs to learn better inter-paragraph relations.",
"Each paragraph can collect information from other related paragraphs to capture global information from the whole input.",
"Graph-informed Self-attention The graph-informed self-attention extends the self-attention mechanism to consider the pairwise relations in explicit graph representations.",
"Let x l 1 i denotes the output of the ( l 1) -th graph encoding layer for paragraph P i , where x 0 i is just the input paragraph vector.",
"For each paragraph P i , the context representation u i can be computed as a weighted sum of linearly transformed paragraph vectors: ij = softmax ( e ij + (cid:60) ij ) e ij =( x l 1 i WQ )( x l 1 j WK ) T d head u i = L (cid:88) j =1 ij ( x l 1 j WV ) (1) where WK , WQ and WV R d d are parameter weights.",
"e tj denotes the latent relation weight between paragraph P i and P j .",
"The main difference of our graph-informed self-attention is the additional pairwise relation bias (cid:60) ij , which is computed as a Gaussian bias of the weights of graph representation matrix G : (cid:60) ij = (1 G [ i ][ j ]) 2 2 2 (2) where denotes the standard deviation that represents the influence intensity of the graph structure.",
"We set it empirically by tuning on the development dataset.",
"The gaussian bias R ij ( inf, 0] measures the tightness between the paragraphs P i and P j .",
"Due to the exponential operation in softmax function, the gaussian bias approximates to multiply the latent attention distribution by a weight (0 , 1] .",
"In our graph-attention mechanism, the term e ij in Equation 1 keeps the ability to model latent dependencies between any two paragraphs, and the term (cid:60) ij incorporates explicit graph representations as prior constraints into the encoding process.",
"This way, our model can learn better and richer inter-paragraph relations to obtain more informative paragraph representations.",
"Then, a two-layer feed-forward network with ReLU activation function and a high-way layer normalization are applied to obtain the vector of each paragraph x li : p li = W o 2 ReLU ( W o 1 ( u i + x l 1 i )) x li = LayerNorm ( p li + x l 1 i ) (3) where W o 1 R d ff d and W o 2 R d d ff are learnable parameters, d ff is the hidden size of the feed-forward layer.",
"Graphs can also contribute to the summary generation process.",
"The relations between textual units can help to generate more coherent or concise summaries.",
"For example, Christensen et al. (2013) propose to leverage an approximate discourse graph to help generate coherent extractive summaries.",
"The discourse relations between sentences are used to help order summary sentences.",
"In this work, we propose to incorporate explicit graph structure into the end-to-end summary decoding process.",
"Graph edges are used to guide the summary generation process via a hierarchical graph attention, which is composed by a global graph attention and a local normalized attention.",
"As other components in the graph decoding layer are similar to the Transformer architecture, we focus on the extension of hierarchical graph attention.",
"Global Graph Attention The global graph attention is developed to capture the paragraph-level context information in the encoder part.",
"Different from the context attention in Transformer, we utilize the explicit graph structure to regularize the attention distributions so that graph representations of documents can be used to guide the summary generation process.",
"Let y l 1 t denotes the output of the ( l 1) -th graph decoding layer for the t -th token in the summary.",
"We assume that each token will align with several related paragraphs and one of them is at the central position.",
"Since the prediction of the central position depends on the corresponding query token, we apply a feed-forward network to transform y l 1 t into a positional hidden state, which is then mapped into a scalar s t by a linear projection: s t = L sigmoid ( U Tp tanh ( W p y l 1 t )) (4) where W p R d d and U p R d denote weight matrix.",
"s t indicates the central position of paragraphs that are mapped by the t -th summary token.",
"With the central position, other paragraphs are determined by the graph structure.",
"Then an attention distribution over all paragraphs under the regularization of the graph structure can be obtained: tj = softmax ( e tj (1 G [ s t ][ j ]) 2 2 2 ) (5) where e tj denotes the attention weight between token vector y l 1 t and paragraph vector x j , which is computed similarly to Equation 1.",
"The global context vector can be obtained as a weighted sum of paragraph vectors: g t = (cid:80) Lj =1 tj x j In our decoder, graphs are also modeled as a Gaussian bias.",
"Different from the encoder, a central mapping position is firstly decided and then graph relations corresponding to that position are used to regularize the attention distributions tj .",
"This way, the relations in graphs are used to help align the information between source input and summary output globally, thus guiding the summary decoding process.",
"Local Normalized Attention Then, a local normalized attention is developed to capture the token-level context information within each paragraph.",
"The local attention is applied to each paragraph independently and normalized by the global graph attention.",
"This way, our model can process longer inputs effectively.",
"Let t,ji denotes the local attention distributions of the t -th summary token over the i -th token in the j -th input paragraph, the normalized attention is computed by: t,ji = t,ji tj (6) and the local context vector can be computed as a weighted sum of token vectors in all paragraphs: l t = (cid:80) Lj =1 (cid:80) nk =1 t,ji x ji Finally, the output of the hierarchical graph attention component is computed by concatenating and linearly transforming the global and local context vector: d t = U Td [ g t , l t ] (7) where U d R 2 d d is a weight matrix.",
"Our model can be easily combined with pre-trained LMs.",
"Pre-trained LMs are mostly based on sequential architectures which are more effective on short text.",
"For example, both BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) are pre-trained with maximum 512 tokens.",
"Liu and Lapata (2019b) propose to utilize BERT on single document summarization tasks.",
"They truncate the input documents to 512 tokens on most tasks.",
"However, thanks to the graph modeling, our model can process much longer inputs.",
"A natural idea is to combine our graph model with pretrained LMs so as to combine the advantages of them.",
"Specifically, the token-level transformer encoding layer of our model can be replaced by a pre-trained LM like BERT.",
"Then they are encoded by a pre-trained LM, and the output vector of the [CLS] token is used as the vector of the corresponding paragraph.",
"Finally, all paragraph vectors are fed into our graph encoder to learn global representations.",
"Our graph decoder is further used to generate the summaries.",
"Graph Representations We experiment with three well-established graph representations: similarity graph, topic graph and discourse graph.",
"The similarity graph is built based on tf-idf cosine similarities between paragraphs to capture lexical relations.",
"The topic graph is built based on LDA topic model (Blei et al., 2003) to capture topic relations between paragraphs.",
"The edge weights are cosine similarities between the topic distributions of the paragraphs.",
"The discourse graph is built to capture discourse relations based on discourse markers (e.g. however, moreover), co-reference and entity links as in Christensen et al. (2013).",
"Other types of graphs can also be used in our model.",
"In our experiments, if not explicitly stated, we use the similarity graph by default as it has been most widely used in previous work.",
"WikiSum Dataset We follow Liu et al. (2018) and Liu and Lapata (2019a) in treating the generation of lead Wikipedia sections as a MDS task.",
"The source documents are reference webpages of the Wikipedia article and top 10 search results returned by Google, while the summary is the Wikipedia article's first section.",
"As the source documents are very long and messy, they are split into multiple paragraphs by line-breaks.",
"Further, the paragraphs are ranked by the title and top ranked paragraphs are selected as input for MDS systems.",
"We directly utilize the ranking results from Liu and Lapata (2019a) and top-40 paragraphs are used as source input.",
"The average length of each paragraph and the target summary are 70.1 tokens and 139.4 tokens, respectively.",
"For the seq2seq baselines, paragraphs are concatenated as a sequence in the ranking order, and lead tokens are used as input.",
"The dataset is split into 1,579,360 instances for training, 38,144 for validation and 38,205 for testing, similar to Liu and Lapata (2019a).",
"We build similarity graph representations over paragraphs on this dataset.",
"MultiNews Dataset Proposed by Fabbri et al. (2019), MultiNews dataset consists of news articles and human-written summaries.",
"The dataset comes from a diverse set of news sources (over 1500 sites).",
"Different from the WikiSum dataset, MultiNews is more similar to the traditional MDS dataset such as DUC, but is much larger in scale.",
"As in Fabbri et al. (2019), the dataset is split into 44,972 instances for training, 5,622 for validation and 5,622 for testing.",
"The average length of source documents and output summaries are 2103.5 tokens and 263.7 tokens, respectively.",
"For the seq2seq baselines, we truncate N input documents to L tokens by taking the first L/N tokens from each source document.",
"Then we concatenate the truncated source documents into a sequence by the original order.",
"Similarly, for our graph model, the input documents are truncated to M paragraphs by taking the first M/N paragraphs from each source document.",
"We build all three types of graph representations on this dataset to explore the influence of graph types on MDS.",
"Training Configuration We train all models with maximum likelihood estimation, and use label smoothing (Szegedy et al., 2016) with smoothing factor 0.1.",
"The optimizer is Adam (Kingma and Ba, 2015) with learning rate 2, 1 =0.9 and 2 =0.998.",
"We also apply learning rate warmup over the first 8,000 steps and decay as in (Vaswani et al., 2017).",
"Gradient clipping with maximum gradient norm 2.0 is also utilized during training.",
"All models are trained on 4 GPUs (Tesla V100) for 500,000 steps with gradient accumulation every four steps.",
"We apply dropout with probability 0.1 before all linear layers in our models.",
"The number of hidden units in our models is set as 256, the feed-forward hidden size is 1,024, and the number of heads is 8.",
"The number of transformer encoding layers, graph encoding layers and graph decoding layers are set as 6, 2 and 8, respectively.",
"The parameter is set as 2.0 after tuning on the validation dataset.",
"During decoding, we use beam search with beam size 5 and length penalty with factor 0.6.",
"Trigram blocking is used to reduce repetitions.",
"For the models with pretrained LMs, we apply different optimizers for the pretrained part and other parts as in (Liu and Lapata, 2019b).",
"Two Adam optimizers with 1 =0.9 and 2 =0.999 are used for the pretrained part and other parts, respectively.",
"The learning rate and warmup steps for the pretrained part are set as 0.002 and 20000, while 0.2 and 10000 for other parts.",
"Other model configurations are in line with the corresponding pretrained LMs.",
"We choose the base version of BERT, RoBERTa and XLNet in our experiments.",
"We evaluate our models on both the WikiSum and MultiNews datasets to validate the efficiency of them on different types of corpora.",
"The summa-Model R-1 R-2 R-L Lead 38.22 16.85 26.89 LexRank 36.12 11.67 22.52 FT 40.56 25.35 34.73 BERT+FT 41.49 25.73 35.59 XLNet+FT 40.85 25.29 35.20 RoBERTa+FT 42.05 27.00 36.56 T-DMCA 40.77 25.60 34.90 HT 41.53 26.52 35.76 GraphSum 42.63 27.70 36.97 GraphSum+RoBERTa 42.99 27.83 37.36 Table 1: Evaluation results on the WikiSum test set using ROUGE F 1 .",
"rization quality is evaluated using ROUGE F 1 (Lin and Och, 2004).",
"We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) between system summaries and gold references as a means of assessing informativeness, and the longest common subsequence (ROUGE-L 2 ) as a means of accessing fluency.",
"Results on WikiSum Table 6 summarizes the evaluation results on the WikiSum dataset.",
"Several strong extractive baselines and abstractive baselines are also evaluated and compared with our models.",
"The first block in the table shows the results of extractive methods Lead and LexRank (Erkan and Radev, 2004).",
"The second block shows the results of abstractive methods: (1) FT (Flat Transformer), a transformer-based encoder-decoder model on a flat token sequence; (2) T-DMCA, the best performing model of Liu et al. (2018); (3) HT (Hierarchical Transformer), a model with hierarchical transformer encoder and flat transformer decoder, proposed by Liu and Lapata (2019a).",
"We report their results following Liu and Lapata (2019a).",
"The last block shows the results of our models, which are feed with 30 paragraphs (about 2400 tokens) as input.",
"The results show that all abstractive models outperform the extractive ones.",
"Compared with FT, T-DMCA and HT, our model GraphSum achieves significant improvements on all three metrics, which demonstrates the effectiveness of our model.",
"Furthermore, we develop several strong base-2 For fair comparison with previous work (Liu and Lapata, 2019a; Liu et al., 2018), we report the summary-level ROUGEL results on both the two datasets.",
"The sentence-level ROUGE-L results are reported in the Appendix.",
"lines which combine the Flat Transformer with pre-trained LMs.",
"We replace the encoder of FT by the base versions of pre-trained LMs, including BERT+FT, XLNet+FT and RoBERTa+FT.",
"For them, the source input is truncated to 512 tokens 3 .",
"The results show that the pre-trained LMs significantly improve the summarization performance.",
"As RoBERTa boosts the summarization performance most significantly, we also combine it with our GraphSum model, namely GraphSum+RoBERTa 4 .",
"The results show that GraphSum+RoBERTa further improves the summarization performance on all metrics, demonstrating that our graph model can be effectively combined with pre-trained LMs.",
"The significant improvements over RoBERTa+FT also demonstrate the effectiveness of our graph modeling even with pre-trained LMs.",
"Results on MultiNews Table 7 summarizes the evaluation results on the MultiNews dataset.",
"Similarly, the first block shows two popular extractive baselines, and the second block shows several strong abstractive baselines.",
"We report the results of Lead, LexRank, PG-BRNN, HiMAP and FT following Fabbri et al. (2019).",
"The last block shows the results of our models.",
"The results show that our model GraphSum consistently outperforms all baselines, which further demonstrate the effectiveness of our model on different types of corpora.",
"We also compare the performance of RoBERTa+FT and GraphSum+RoBERTa, which show that our model significantly improves all metrics.",
"3 Longer inputs don't achieve obvious improvements.",
"4 As XLNet and BERT achieve worse results than RoBERTa, we only report the results of GraphSum+RoBERTa Len Model R-1 R-2 R-L 500 HT 41.08 25.83 35.25 GraphSum 41.55 26.24 35.59 +0.47 +0.41 +0.34 800 HT 41.41 26.46 35.79 GraphSum 41.70 26.87 36.10 +0.29 +0.41 +0.31 1600 HT 41.53 26.52 35.76 GraphSum 42.48 27.52 36.66 +0.95 +1.00 +0.90 2400 HT 41.68 26.53 35.73 GraphSum 42.63 27.70 36.97 +0.95 +1.17 +1.24 3000 HT 41.71 26.58 35.81 GraphSum 42.36 27.47 36.65 +0.65 +0.89 +0.84 Table 3: Comparison of different input length on the WikiSum test set using ROUGE F 1 .",
"The above evaluation results on both WikiSum and MultiNews dataset both validate the effectiveness of our model.",
"The proposed method to modeling graph in end-to-end neural model greatly improves the performance of MDS.",
"We further analyze the effects of graph types and input length on our model, and validate the effectiveness of different components of our model by ablation studies.",
"Effects of Graph Types To study the effects of graph types, the results of GraphSum+RoBERTa with similarity graph, topic graph and discourse graph are compared on the MultiNews test set.",
"The last block in Table 7 summarizes the comparison results, which show that the topic graph achieves better performance than similarity graph on ROUGE-1 and ROUGE-2, and the discourse graph achieves the best performance on ROUGE-2 and ROUGE-L.",
"The results demonstrate that graphs with richer relations are more helpful to MDS.",
"Effects of Input Length Different lengths of input may affect the summarization performance seriously for Seq2Seq models, so most of them restrict the length of input and only feed the model with hundreds of lead tokens.",
"As stated by Liu and Lapata (2019a), the FT model achieves the best performance when the input length is set to 800 Model Rouge-1 Rouge-2 Rouge-L GraphSum 42.63 27.70 36.97 w/o graph dec 42.06 27.13 36.33 w/o graph enc 40.61 25.90 35.26 Table 4: Ablation study on the WikiSum test set.",
"tokens, while longer input hurts performance.",
"To explore the effectiveness of our GraphSum model on different length of input, we compare it with HT on 500, 800, 1600, 2400 and 3000 tokens of input respectively.",
"Table 3 summarizes the comparison results, which show that our model outperforms HT on all length of input.",
"More importantly, the advantages of our model on all three metrics tend to become larger as the input becomes longer.",
"The results demonstrate that modeling graph in the end-to-end model enables our model process much longer inputs with better performance.",
"Ablation Study Table 4 summarizes the results of ablation studies aiming to validate the effectiveness of individual components.",
"Our experiments confirmed that incorporating well-known graphs into the encoding process by our graph encoder (see w/o graph enc) and utilizing graphs to guide the summary decoding process by our graph decoder (w/o graph dec) are both beneficial for MDS.",
"In addition to the automatic evaluation, we also access system performance by human evaluation.",
"We randomly select 50 test instances from the WikiSum test set and 50 from the MultiNews test set, and invite 3 annotators to access the outputs of different models independently.",
"Annotators access the overall quality of summaries by ranking them taking into account the following criteria: (1) Informativeness : does the summary convey important facts of the input?",
"(2) Fluency : is the summary fluent and grammatical?",
"(3) Succinctness : does the summary avoid repeating information?",
"Annotators are asked to ranking all systems from 1(best) to 5 (worst).",
"Ranking could be the same for different systems if they have similar quality.",
"For example, the ranking of five systems could be 1, 2, 2, 4, 5 or 1, 2, 3, 3, 3.",
"All systems get score 2, 1, 0, -1, -2 for ranking 1, 2, 3, 4, 5 respectively.",
"The rating of each system is computed by averaging the scores on all test instances.",
"Table 5 summarizes the comparison results of five systems.",
"Both the percentage of ranking results Model 1 2 3 4 5 Rating FT 0.18 0.21 0.23 0.16 0.22 -0.03 R.B.+FT 0.32 0.22 0.17 0.19 0.10 0.49 HT 0.21 0.32 0.12 0.15 0.20 0.19 GraphSum 0.42 0.30 0.17 0.10 0.01 1.02 G.S.+R.B. 0.54 0.24 0.10 0.08 0.04 1.16 Table 5: Ranking results of system summaries by human evaluation.",
"and overall ratings are reported.",
"The results demonstrate that GraphSum and GraphSum+RoBERTa are able to generate higher quality summaries than other models.",
"Specifically, the summaries generated by GraphSum and GraphSum+RoBERTa usually contains more salient information, and are more fluent and concise than other models.",
"The human evaluation results further validates the effectiveness of our proposed models.",
"In this paper we explore the importance of graph representations in MDS and propose to leverage graphs to improve the performance of neural abstractive MDS.",
"Our proposed model is able to incorporate explicit graph representations into the document encoding process to capture richer relations within long inputs, and utilize explicit graph structure to guide the summary decoding process to generate more informative, fluent and concise summaries.",
"We also propose an effective method to combine our model with pre-trained LMs, which further improves the performance of MDS significantly.",
"Experimental results show that our model outperforms several strong baselines by a wide margin.",
"In the future we would like to explore other more informative graph representations such as knowledge graphs, and apply them to further improve the summary quality.",
"This work was supported by the National Key Research and Development Project of China (No. 2018AAA0101900)."
] | [
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"method",
"result",
"method",
"method",
"method",
"result",
"objective",
"result",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"objective",
"other"
] |
[
"Existing datasets for regular expression (regex) generation from natural language are limited in complexity; compared to regex tasks that users post on StackOverflow, the regexes in these datasets are simple, and the language used to describe them is not diverse.",
"We introduce STRUCTUREDREGEX , a new regex synthesis dataset differing from prior ones in three aspects.",
"First, to obtain structurally complex and realistic regexes, we generate the regexes using a probabilistic grammar with pre-defined macros observed from real-world StackOverflow posts.",
"Second, to obtain linguistically diverse natural language descriptions, we show crowdworkers abstract depictions of the underlying regex and ask them to describe the pattern they see, rather than having them paraphrase synthetic language.",
"Third, we augment each regex example with a collection of strings that are and are not matched by the ground truth regex, similar to how real users give examples.",
"Our quantitative and qualitative analysis demonstrates the advantages of STRUCTUREDREGEX over prior datasets.",
"Further experimental results using various multimodal synthesis techniques highlight the challenge presented by our dataset, including non-local constraints and multi-modal inputs.",
"1 1 Introduction Regular expressions (regexes) are known for their usefulness and wide applicability, and yet they are hard to understand and write, even for many programmers (Friedl, 2006).",
"Recent research has therefore studied how to construct regexes from natural language (NL) descriptions, leading to the emergence of NL-to-regex datasets including 1 Code and data available at https://www.cs.",
"KB13 (Kushman and Barzilay, 2013) and NL-TURK (Locascio et al., 2016).",
"However, KB13 is small in size, with only 814 NL-regex pairs with even fewer distinct regexes.",
"Locascio et al. (2016) subsequently employed a generate-and-paraphrase procedure (Wang et al., 2015) to create the larger NL-TURK dataset.",
"However, the regexes in this dataset are very simple, and the descriptions are short, formulaic, and not linguistically diverse because of the paraphrasing annotation procedure (Herzig and Berant, 2019).",
"As a result, even when models achieve credible performance on these datasets, they completely fail when evaluated on the STACKOVERFLOW dataset (Ye et al., 2019), a real-world dataset collected from users seeking help on StackOverflow.",
"The limited size of this dataset (only 62 NL-regex pairs) makes it",
"unsuitable for large-scale training, and critically, the complexity of regexes it features means that regex synthesis systems must leverage the user-provided positive and negative examples (strings that should be matched or rejected by the target regex) in order to do well.",
"To enable the development of large-scale neural models in this more realistic regex setting, we present STRUCTUREDREGEX , a new dataset of English language descriptions and pos-itive/negative examples associated with complex regexes.",
"Using a new data collection procedure (Figure 1), our dataset addresses two major limitations in NL-TURK .",
"First, we generate our regexes using a structured probabilistic grammar which includes macro rules defining high-level templates and constructions that involve multiple basic operators.",
"These grammar structures allow us to sample more realistic regexes, with more terminals and operators, while avoiding vacuous regexes.",
"By contrast, the random sampling procedure in NL-TURK leads to simple regexes, and attempting to sample more complex regexes results in atypical regex structures or even contradictory regexes that do not match any string values (Ye et al., 2019).",
"Second, to achieve more realistic language descriptions, we prompt Turkers to write descriptions based on abstract figures that show the desired regexes.",
"We design a set of visual symbols and glyphs to draw a given regex with minimal textual hints.",
"We thereby avoid priming Turkers to a particular way of describing things, hence yielding more linguistically diverse descriptions.",
"Using this methodology, we collect a total of 3,520 English descriptions, paired with ground truth regexes and associated positive/negative examples.",
"We conduct a comprehensive analysis and demonstrate several linguistic features present in our dataset which do not occur in past datasets.",
"We evaluate a set of baselines, including grammar-based methods and neural models, on our dataset.",
"In addition, we propose a novel decoding algorithm that integrates constrained decoding using positive/negative examples during inference: this demonstrates the potential of our dataset to enable work at the intersection of NLP and program synthesis.",
"The performance of the best existing approach on STRUCTUREDREGEX only reaches 37%, which is far behind 84% on NL-TURK .",
"However, this simple model can nevertheless solve 13% of the STACKOVERFLOW dataset, indicating that further progress on this dataset can be useful for real-world scenarios.",
"We first describe the structured generative process we adopt to produce the regexes in our dataset.",
"For better readability, we denote regexes using a domain specific language (DSL) similar to regex DSLs in prior work (Locascio et al., 2016; Ye et al., 2019).",
"Our DSL has the same expressiveness as a standard regular language and can be easily mapped back to standard regular expressions.",
"2 To collect the NL-TURK dataset, Locascio et al. (2016) sampled regexes using a hand-crafted grammar similar to a standard regex DSL.",
"However, regexes sampled from this process can easily have conflicts (e.g. and(<let>,<num>) ) or redundancies (e.g. or(<let>,<low>) ).",
"One solution to this problem is rejection sampling, but this still does not yield regexes with compositional, real-world structure.",
"We show three prominent types of composition observed from STACKOVERFLOW in Figure 2.",
"Each regex above is built by assembling several sub-regexes together according to a high-level template: regex",
"(a) is the intersection of two base regexes expressing constraints, regex",
"(b) is a sequence of three simple parts, and regex",
"(c) is a 2 Refer to the appendix for details of our DSL.",
"and(repatleast(or(<num>,<let>),1),and(startwith(<num>),not(startwith(<0>))))",
"list of three segments delimited by a constant.",
"We observe that these three templates actually capture a wide range of possible regex settings.",
"The first, for example, handles password validation-esque settings where we have a series of constraints to apply to a single string.",
"The second and third reflect matching sequences of fields, which may have shared structured (regex",
"(c)) or be more or less independent (regex",
"(b)).",
"To generate realistic regexes in these forms, we rely on a structured hand-crafted grammar.",
"The top level of our grammar specifies three templates distilled from STACKOVERFLOW examples: INTERSECTION , CONCATENATION , and SEPARATION , which mimic patterns of real-world regexes.",
"In Figure 3, we show how regexes in Figure 2 can be derived from our templates.",
"The INTERSECTION template (left) intersects several base constraints with the and operator; the CONCATENATION template (middle) concatenates several base components with the concat operator.",
"SEPARATION (right) is a more complex type, generating a list of constant-separated INTERSECTION or CONCATENATION regexes which may be identical or share common components.",
"Across all templates, the components are sub-regexes falling into a few high-level types (no-tably Cons and Comp ), which are depth-limited to control the overall complexity (discussed in Appendix B.2).",
"To make these component regexes more realistic as well, we design several macro rules that expand to more than one operator.",
"The macros are also extracted from real-world examples and capture complex relations like adversative (Figure 4) and conditional (Table 2) relations.",
"Although our hand-crafted grammar does not cover every possible construction allowed by the regular expression language, it is still highly expressive.",
"Based on manual analysis, our grammar covers 80% of the real-world regexes in STACKOVERFLOW , whereas the grammar of NL-TURK only covers 24% (see Section 4).",
"Note that some constructions apparently omitted by our grammar are equivalent to ones supported by our grammar: e.g., we don't allow a global startwith constraint in the CONCATENATION template, but this constraint can be expressed by having the first component of the concatenation incorporate the desired constraint.",
"Although our structural constraints on the grammar already give rise to more realistic regexes, we still want to impose further control over the generative process to mimic properties of real-world regexes.",
"For example, there are sometimes repeating components in CONCATENATION regexes, such as regex",
"(b) from Figure 2.",
"We encourage such regexes by dynamically modifying the probability of applying the grammar rules while we are expanding a regex based on the status of the entire tree that has currently been induced.",
"For example, suppose we are building regex",
"(b) from Figure 2, and suppose we currently have concat(reprange(<num>, 1,2),concat(<.>, Comp )) , where Comp is a non-terminal that needs to be expanded into a sub-regex.",
"Because we already have reprrange(<num>,1,2) and",
"<.> in the current tree, we increase the probability of expanding Comp to generate these particular two sub-regexes, allowing the model to copy from what it has generated before.",
"3 In addition to copying, we also change the sampling distribution when sampling children of certain grammar constructs to control for complexity and encourage sampling of valid regexes.",
"For example, the child of a startwith expression will typically be less complex and compositional than the child of a Comp expression, so we tune the probabilities of sampling compositional AST operators like or appropriately.",
"The STACKOVERFLOW dataset (Ye et al., 2019) shows that programmers often provide both positive and negative examples to fully convey their intents while specifying a complicated regex.",
"Therefore, we augment our dataset with positive and negative examples for each regex.",
"Our model will use these examples to resolve ambiguity present in the natural language descriptions.",
"However, the examples can also help Turkers to better understand the regexes they are describing during the data collection process.",
"3 This component reuse bears some similarity to an Adaptor Grammar (Johnson et al., 2007).",
"However, we modify the distributions in a way that violates exchangeability, making it not formally equivalent to one.",
"positive: negative: A1234 negative: a123 concat(<low>,repatleast(<num>,4)) concat(<cap>,repatleast(<num>,4)) concat(<low>,rep(<num>,3)) perturb perturb a1234 b5678 Figure 5: The process of generating distinguishing negative examples by minorly perturbing each of the sub-regexes in the ground truth regex.",
"We aim to generate diverse and distinguishing examples similar to human-written ones, which often include corner cases that differentiate the ground truth regex from closely-related spurious ones.",
"We can achieve this by enumerating examples that cover the states in the deterministic finite automaton (DFA) defined by the given regex 4 and reject similar but incorrect regexes.",
"We employ the Automaton Library (Mller, 2017) to generate the examples in our work.",
"Positive examples are generated by stochastically traversing the DFA.",
"For negative examples, randomly sampling examples from the negation of a given regex will typically produce obviously wrong examples and not distinguishing negative examples as desired.",
"Therefore, we propose an alternative approach shown in Figure 5 for generating negative examples.",
"We apply minor perturbations to the ground truth regex to cause it to accept a set of strings that do not intersect with the set recognized by the original regex.",
"The negative examples can be derived by sampling a positive string from one of these incorrect regexes.",
"For each regex in our dataset, we generate 6 positive examples and 6 negative examples.",
"These numbers are comparable to the average number of examples provided by STACKOVERFLOW users.",
"As stated previously, we avoid the paradigm of asking users to paraphrase machine-generated regex descriptions, as this methodology can yield formulaic and artificial descriptions.",
"Instead, we ask users to describe regexes based on figures that illustrate how the regex is built.",
"We show one example figure of a SEPARATION regex in Figure 6.",
"In general, we abstract a given regex as a series of blocks linked with textual descriptions of its content and constraints.",
"For instance, startwith and endwith are denoted by shading the head or tail of a block.",
"By linking multiple blocks to shared tex-4 Recall that although our DSL is tree-structured, it is equivalent in power standard regexes, and hence our expressions can be mapped to DFAs.",
"tual descriptions, we hope to encourage Turkers to notice the correlation and write descriptions accordingly.",
"Finally, we have different textual hints for the same concept: contain x in Figure 6 may appear as have x elsewhere.",
"These figures are rendered for each regex in the MTurk interface using JavaScript.",
"Task We collected the STRUCTUREDREGEX dataset on Amazon Mechanical Turk (MTurk).",
"For each HIT, the Turkers are presented with a regex figure and a set of positive/negative examples.",
"Then, they are asked to write down several sentences describing the regex, as well as one additional positive example that matches the regex.",
"We only accept a description if the submitted positive example is matched by the ground-truth regex; this helps filter out some cases where the Turker may have misunderstood the regex.",
"We show an example HIT in Appendix C. In early pilot studies, we explored other ways of abstractly explaining regexes to Turkers, such as providing more examples and an associated set of keywords, yet none of these methods led to users generating sufficiently precise descriptions.",
"By contrast, our figures fully specify the semantics of the regexes while only minimally biasing Turkers towards certain ways of describing them.",
"We generated 1,200 regexes (400 from each template), assigned each regex to three Turkers, and collected a total of 3,520 descriptions after rejecting HITs.",
"In general, each Turker spent 2 to 3 minutes on each of the HITs, and we set the reward to be $0.35.",
"The total cost of collecting our dataset was $1,512, and the average cost for each description is $0.43.",
"Quality To ensure the quality of collected responses, we require the Turkers to first take a qualification test which simply requires describing one regex that we have specified in advance.",
"We then check that the description for this regex is sufficiently long and that it contains enough of our manually-written correct base regex concepts.",
"We manually observed from the responses that various styles were adopted by different Turkers for describing the same type of regexes.",
"For instance, given regex",
"(b) in Figure 2, some Turkers tend to enumerate every component in order, describing it as one or two digits followed by a dot followed by one or two digits ; some other Turkers prefer grouping identical components and describing the components out of order, describing it as the first and third parts are one or two digits, and the second part is a dot .",
"These distinct styles lead to a diversity of linguistic phenomena, which is further analyzed in Section 4.",
"Because we aim for high linguistic diversity in our dataset, we prohibited a single Turker from doing more than 300 HITs.",
"Furthermore, we found anecdotal evidence that the task was engaging for users, which we took as a positive signal for generation quality.",
"We received messages about our HITs from some Turkers telling us that our HIT was really interesting and they enjoyed doing it.",
"Splitting the Dataset Since our dataset consists of natural language descriptions written by annotators, there is possibly bias introduced by training and testing on the same annotators (Geva et al., 2019).",
"Therefore, in addition to the standard Train/Development/Test splits, we also form a Test-E (excluded) which consists only of annotations from annotators unseen in the training set.",
"We ensure that Train, Dev, and both two test sets (Test and Test-E) have mutually exclusive regexes from each other (Test and Test-E can have common regexes), and Test-E is annotated entirely by TURKSTREG Example NL from STREG multi-sentence 0% 70% The string has 6 or more characters .",
"a disjoint set of annotators from those who annotated the training or development set.",
"The final size of the splits are: 2173 (61.7%), 351 (10.0%), 629 (17.9%), 367 (10.4%).",
"We demonstrate the advantages of our dataset over prior datasets (Kushman and Barzilay, 2013; Locascio et al., 2016) through both quantitative and qualitative analysis.",
"We list the key statistics of our dataset as well as KB13 and NL-TURK for comparison in Table 1.",
"Compared to past synthetic datasets, our dataset has more diverse and sophisticated language.",
"The average NL length of our dataset is twice as long as that of NL-TURK , and the descriptions contain many more unique words even though our dataset contains fewer regexes.",
"In addition, our dataset contains more complex regexes that are closer to the complexity of real-world regexes found on StackOverflow, whereas regexes in previous datasets are significantly simpler.",
"Manual Analysis We further manually analyze 150 descriptions from past synthetic datasets and our dataset.",
"Table 2 lists the proportion of descriptions containing each of several phenomena: examples that are multi-sentence , examples with clear syntactic or semantic ambiguity , examples using abstraction to refer to different parts of the regex, examples invoking non-local constraints , and examples with nontrivial coreference .",
"The language from our dataset is organic and diverse, since we allow Turkers to compose their own descriptions.",
"We find that macros and complex constraints in our structured grammar can successfully trigger interesting language.",
"For instance, the abstraction reflects repetition in concatenation regexes, and the bottom part of Table 2 reflects the KB13 TURKSTREG Word Coverage 27.1% 34.4% 55.9% Regex Coverage 23.5% 23.5% 84.3% Table 3: Distribution mismatch analysis with respect to STACKOVERFLOW on past datasets and our dataset.",
"Furthermore, the complex and ambiguous language highlights the necessity of including examples together with language to fully specify a regex.",
"For instance, ambiguity is common in our descriptions.",
"However, many of the ambiguous descriptions can be resolved with the help of examples.",
"Concretely, the description for ambiguity from Table 2 can be easily interpreted as startwith(concat(<let>, repeat(<num>,2))) while the ground truth is concat(<let>,repeat(<num>,2)) .",
"By simply adding one negative example, a123, the ground truth can be distinguished from the spurious regex.",
"Comparison to STACKOVERFLOW Since our goal was to produce realistic regex data, we analyze how well the real-world STACKOVERFLOW dataset is covered by data from STRUCTUREDREGEX compared to other datasets (Kush-man and Barzilay, 2013; Locascio et al., 2016).",
"We ignore 11 of the STACKOVERFLOW examples that involve the high-level decimal concept, which is beyond the scope of our dataset and past synthetic datasets.",
"In addition, we anonymize all the constants and integer parameters (e.g., repeat(<x>,9) is anonymized as repeat(const,int) ).",
"The statistics (Table 3) suggest that our dataset is more highly similar to real-world regexes on StackOverflow, especially in terms of regex distribution.",
"We evaluate the accuracy of both existing grammar-based approaches and neural models, as well as a novel method that targets the multimodal nature of our dataset.",
"Existing Approaches SEMANTIC-UNIFY (Kushman and Barzilay, 2013) is a grammar-based approach that relies on a probabilistic combinatory categorical grammar to build the regexes.",
"DEEPREGEX (Locascio et al., 2016) directly translates natural language descriptions into regexes using a seq-to-seq model enhanced with attention (Luong et al., 2015) without considering examples.",
"We re-implemented DEEPREGEX with slightly different hyperparameters; we refer to our re-implementation as DEEPREGEX (OURS ).",
"DEEPREGEX +F ILTER (Ye et al., 2019) adapts DEEPREGEX so as to take examples into account by simply filtering the k -best regexes based on whether a regex accepts all the positive examples and rejects all the negative ones.",
"Example-Guided Decoding Although DEEPREGEX +F ILTER is able to take advantage of positive and negative string examples, these examples are completely isolated in the training and inference phase.",
"We propose to make use of examples during inference with the technique of over-and underapproximation (Lee et al., 2016) used in the program synthesis domain.",
"The core idea of our approach is that, for each partially completed regex during decoding, we use the approximation technique to infer whether the regex can possibly match all positive or reject all negative examples.",
"If this is impossible, we can prune this partial regex from our search.",
"This approach allows us to more effectively explore the set of plausible regexes without increasing the computational budget or beam size.",
"As an example, consider the ground truth regex and(startwith(<low>),endwith(<num>)) with one corresponding positive example 00x.",
"Suppose that the decoder has so far generated the incomplete regex and(startwith(<cap>), .",
"To produce a syntactically valid regex, the decoder needs to generate a second argument for the and .",
"By appending star(<any>) as its second argument, we can see that there is no completion here that will accept the given positive example, allowing us to reject this regex from the beam.",
"Under-approximation works analogously, Approach KB13 TURKSTREGSEMANTIC-UNIFY 65.5% 38.6% 1.8% DEEPREGEX (Locascio et al.) 65.6% 58.2% DEEPREGEX (Ours) 66.5% 60.2% 24.5% DEEPREGEX + FILTER 77.7% 83.8% 37.2% Table 4: DFA-equivalent accuracy on prior datasets and our dataset.",
"completing regexes with maximally restrictive arguments and checking that negative examples are rejected.",
"We integrate the aforementioned technique in the beam decoding process by simply pruning out bad partial derivations at each timestep.",
"We refer to this approach as DEEPREGEX + APPROX .",
"We evaluate the baseline models on KB13, NL-TURK , and our dataset (Table 4).",
"The results show that our dataset is far more challenging compared to existing datasets.",
"Traditional grammar baseline can scarcely solve our dataset.",
"The best baseline, DEEPREGEX + FILTER , achieves more than 77.7% on KB13 and 83.8% NL-TURK when these datasets are augmented with examples, but can only tackle 37.2% of our dataset.",
"Additionally, the comparison between DEEPREGEX and DEEPREGEX + FILTER demonstrates that simply filtering the outputs of neural model leads to a substantial performance boost on all the datasets.",
"This supports the effectiveness of the way we specify regexes, i.e., using both natural language descriptions and examples.",
"Table 5 shows the detailed accuracy regarding different regex templates on both Test and Test-E sets.",
"Our DEEPREGEX + APPROX achieves best accuracy with 5.6% and 7.9% improvement over DEEPREGEX + FILTER on Test and Test-E, respectively, since it can leverage examples more effectively using overand underapproximations during search.",
"Accuracy varies on different types of regexes.",
"Generally, models perform the best on concatenation regexes, slightly worse on intersection regexes, and the worst on separation regexes.",
"Concatenation regexes usually have straightforward Approach Test Test-E Agg Int Cat Sep Agg Int Cat Sep SEMANTIC-UNIFY 2.1% 2.9% 3.1% 0.0% 1.4% 1.6% 2.4% 0.0% DEEPREGEX (Ours) 27.8% 20.7% 42.2% 19.2% 18.8% 18.0% 23.6% 14.8% DEEPREGEX + FILTER 42.6% 38.9% 55.2% 32.3% 28.1% 32.0% 32.5% 19.7% DEEPREGEX + APPROX 48.2% 45.7% 59.6% 37.9% 36.0% 39.3% 40.7% 27.9% Table 5: Results for models trained and tested on STRUCTUREDREGEX .",
"descriptions in the form of listing simple components one by one.",
"Intersection descriptions can be more complicated because of the high-level macros specified by our grammar.",
"Separation descriptions are the most complex ones that often involve coreferences and non-local features.",
"Performance on Test-E is 12% lower than on Test for the models haven't been trained on patterns of the unseen annotators.",
"Finally, we investigate whether a model trained on our dataset can transfer to the STACKOVERFLOW dataset.",
"As in Section 4, we ignore instances requiring the decimal concept and only evaluate on the subset of STACKOVERFLOW with 51 instances.",
"We compare our dataset with NL-TURK for this task.",
"As shown in Table 6, DEEPREGEX trained on NL-TURK completely fails on STACKOVERFLOW and even fails to predict reasonable regexes that are consistent with the examples.",
"This is caused by the fact that the NL-TURK dataset contains formulaic descriptions and shallow regexes that are not representative of real-world tasks.",
"DEEPREGEX trained on our dataset can at least achieve 9.8% accuracy on STACKOVERFLOW dataset because the English descriptions in this dataset better match the desired task.",
"Our DEEPREGEX + APPROX model successfully solves 13.7% and finds consistent regexes for 38% of the tasks, which is credible given that the performance of the same model on Test-E set is only 30%.",
"Some additional challenges in STACKOVERFLOW are instances involving large numbers of constants or slightly more formal language since the SO users are mainly programmers.",
"However, we believe the transfer results here show that improved performance on our dataset may transfer to STACKOVERFLOW as well, since some of the challenges also present in our Test-E set (e.g., unseen language).",
"It is difficult to hire Turkers to estimate a human performance upper bound, because our task requires reckoning with both the descriptions and positive/negative examples.",
"Unlike many NLP tasks where an example with ambiguous language is fundamentally impossible, here the examples may actually still allow a human to determine the correct answer with enough sleuthing.",
"But to perform this task, crowdworkers would minimally need to be trained to understand the DSL constructs and how they compose, which would require an extensive tutorial and qualification test.",
"To do the task well, Turkers would need a tool to do on-the-fly execution of their proposed regexes on the provided examples.",
"We instead opted for a lighter-weight verifica-tion approach to estimate human performance.",
"We adopted a post-editing approach on failure cases from our model, where we compared the model's output with the input description and examples and corrected inconsistencies.",
"Specifically, we sample 100 failure examples from the test set (Test plus Test-E) and manually assess the failure cases.",
"We find 78% of failure cases contain descriptions that describe all components of the target regexes, but our seq-to-seq models are insufficient to capture these.",
"There are truly some misor under-specified examples, such as not mentioning the optionality of one component or mistaking I for l in constants.",
"An additional 9% (out of 100) of the errors could be fixed using the provided examples.",
"This leaves roughly 13% of failure cases that are challenging to solve.",
"Considering that the model already achieves 43.6% accuracy on the test set, we estimate human performance is around 90%.",
"5 7 Related Work Data collection in semantic parsing Collecting large-scale data for semantic parsing and related tasks is a long-standing challenge (Berant et al., 2013; Wang et al., 2015).",
"Wang et al. (2015) proposed the generate-and-paraphrase framework, which has been adopted to collect datasets in various domains (Locascio et al., 2016; Ravichander et al., 2017; Johnson et al., 2017).",
"However, this process often biases annotators towards using formulaic language (Ravichander et al., 2017; Herzig and Berant, 2019).",
"Similar to our work, past work has sought to elicit linguistically diverse data using visual elements for semantic parsing (Long et al., 2016), natural language generation (Novikova et al., 2016), and visual reasoning (Suhr et al., 2017, 2019).",
"However, for these other tasks, the images used are depictions of an inherently graphical underlying world state; e.g., the NLVR dataset (Suhr et al., 2017) and NLVR2 (Suhr et al., 2019) are based on reasoning over the presented images, and the Tangrams dataset (Long et al., 2016) involves describing shape transformations.",
"By contrast, regexes are typically represented as source code; there is no standard graphical schema for depicting the patterns they recognize.",
"This changes the properties of the generated descriptions, leading to higher levels of compositionality and ambiguity because what's being described is not naturally an image.",
"has tackled the problem of program synthesis from examples (Gulwani, 2011; Gulwani and Jain,",
"5 In addition, the first author manually wrote regexes for 100 randomly sampled examples and achieved an accuracy of 95% (higher than the estimate).",
"However, the author also has a strong prior over what synthetic regexes are likely to be in the data.",
"2017; Alur et al., 2013; Wang et al., 2016; Feng et al., 2018; Devlin et al., 2017; Nye et al., 2019).",
"A closer line of work to ours uses both examples and natural language input (Yaghmazadeh et al., 2017; Ye et al., 2019; Andreas et al., 2018), which involves fundamentally different techniques.",
"However, our work does not rely on the same sort of program synthesizer to build final outputs (Yaghmazadeh et al., 2017; Ye et al., 2019).",
"Moreover, Andreas et al. (2018) only use language at train time, whereas we use NL at both train and test time.",
"Finally, while several datasets on regex synthesis specifically have been released (Kushman and Barzilay, 2013; Locascio et al., 2016), we are the first to incorporate examples in the dataset.",
"Other methods have been proposed to parse natural language into regex via rule-based (Ranta, 1998), grammar-based (Kushman and Barzilay, 2013), or neural models (Locascio et al., 2016; Zhong et al., 2018; Ye et al., 2019).",
"Notably, Zhong et al. (2018) also generate distinguishing examples to facilitate translation, but they require a trained model to generate examples, and we organically derive examples from the structure of regexes without additional input.",
"We introduce STRUCTUREDREGEX , a new dataset for regex synthesis from natural language and examples.",
"Our dataset contains compositionally structured regexes paired with linguistically diverse language, and organically includes distinguishing examples.",
"Better methods are needed to solve this dataset; we show that such methods might generalize well to real-world settings.",
"This work was partially supported by NSF Grant IIS-1814522, NSF Grant SHF-1762299, a gift from Arm, and an equipment grant from NVIDIA.",
"The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources used to conduct this research.",
"Thanks as well to the anonymous reviewers for their helpful comments."
] | [
"abstain",
"objective",
"objective",
"result",
"objective",
"objective",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"abstain",
"result",
"method",
"abstain",
"method",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"other",
"other",
"other"
] |
[
"Transfer learning improves quality for low-resource machine translation, but it is unclear what exactly it transfers.",
"We perform several ablation studies that limit information transfer, then measure the quality impact across three language pairs to gain a black-box understanding of transfer learning.",
"Word embeddings play an important role in transfer learning, particularly if they are properly aligned.",
"Although transfer learning can be performed without embeddings, results are sub-optimal.",
"In contrast, transferring only the embeddings but nothing else yields catastrophic results.",
"We then investigate diagonal alignments with auto-encoders over real languages and randomly generated sequences, finding even randomly generated sequences as parents yield noticeable but smaller gains.",
"Finally, transfer learning can eliminate the need for a warmup phase when training transformer models in high resource language pairs.",
"Transfer learning is a common method for low-resource neural machine translation (NMT) (Zoph et al., 2016; Dabre et al., 2017; Qi et al., 2018; Nguyen and Chiang, 2017; Gu et al., 2018b).",
"However, it is unclear what settings make transfer learning successful and what knowledge is being transferred.",
"Understanding why transfer learning is successful can improve best practices while also opening the door to investigating ways to gain similar ben-efits without requiring parent models.",
"In this paper, we perform several ablation studies on transfer learning in order to understand what information is being transferred.",
"We apply a black box methodology by measuring the quality of end-to-end translation systems.",
"Typically, our experiments have a baseline that was trained from scratch, an off-the-shelf transfer learning baseline and simplified versions of the transfer learning scheme.",
"If a simplified version recovers some of the quality gains of full transfer learning, it suggests that the simplified version has captured some of the information being transferred.",
"Since information may be transferred redundantly, our claims are limited to sufficiency rather than exclusivity.",
"Transferring word embeddings is not straightforward since languages have different vocabularies.",
"Zoph et al. (2016) claimed that vocabulary alignment is not necessary, while Nguyen and Chiang (2017) and Kocmi and Bojar (2018) suggest a joint vocabulary.",
"We find that the vocabulary has to be aligned before transferring the embedding to achieve a substantial improvement.",
"Transfer learning without the embedding or with vocabulary mismatches is still possible, but with lower quality.",
"Conversely, transferring only the word embeddings can be worse than transferring nothing at all.",
"A rudimentary model of machine translation consists of alignment and token mapping.",
"We hypothesize that these capabilities are transferred across languages.",
"To test this, we experiment with transferring from auto-encoders that learn purely diagonal alignment and possibly language modelling.",
"To remove the effect of language modelling, we train auto-encoders on random strings sampled uniformly.",
"However, all of these scenarios still have simple copying behaviour, especially with tied embeddings.",
"Therefore, we also attempt a bijective vocabulary mapping from source to target, forcing the model to learn the mapping as well.",
"Curiously, parents trained with bijectively-mapped vocabularies transfer slightly better to children.",
"We then investigate transfer learning for high-resource children, where the goal is reduced training time since they mainly attain the same quality.",
"Transfer learning primarily replaces the warm-up period, though only real language parents yielded faster training.",
"Transfer learning has been successfully used in low-resource scenarios for NMT.",
"Zoph et al. (2016) gain 5 BLEU points in UzbekEnglish by transferring from FrenchEnglish.",
"Their style of transfer learning copies the entire model, including word embeddings, ignoring the vocabulary mismatch between parent and child.",
"They used separate embeddings for source and target language words, whereas tied embeddings (Press and Wolf, 2017; Vaswani et al., 2017) have since become the de-facto standard in low-resource NMT.",
"Tied embeddings provide us with the opportunity to revisit some of their findings.",
"In Section 5, we find an EnglishEnglish copy model does work as a parent with tied embeddings, whereas Zoph et al. (2016) reported no gains from a copy model with untied embeddings.",
"Methods to cope with vocabulary mismatch have improved since Zoph et al. (2016).",
"Kocmi and Bojar (2018) suggest that a shared vocabulary between the parent language and the child is benefi-cial, though this requires knowledge of the child languages when the parent is trained.",
"Addressing this issue, Gheini and May (2019) proposed a universal vocabulary for transfer learning.",
"Their universal vocabulary was obtained by jointly training the sub-word tokens across multiple languages at once, applying Romanisation to languages in non-Latin scripts.",
"However, unseen languages may only be representable in this universal vocabulary with a very aggressive and potentially sub-optimal subword segmentation.",
"Orthogonally, Kim et al. (2018); Lample et al. (2018); Artetxe et al. (2018); Kim et al. (2019) use bilingual word embedding alignment to initialise the embedding layer to tackle low resource language pairs.",
"In Section 4.2, we compare a variety of vocabulary transfer methods.",
"Prior work (Dabre et al., 2017; Nguyen and Chiang, 2017) stated that a related language is the best parent for transfer learning.",
"Lin et al. (2019) explore options to choose the best parent and conclude that the best parent language might not necessarily be related but is instead based on external factors such as the corpus size.",
"In Section 3, we try two parent models in both directions to set baselines for the rest of the paper; an exhaustive search is not our main purpose.",
"Another approach to low-resource (or even zero-shot) NMT is through multilingual models (John-son et al., 2016), which is similar to training the parent and child simultaneously.",
"A related idea creates meta-models with vocabulary residing in a shared semantic space (Gu et al., 2018a,b).",
"If there is more parallel data with a third language, often English, then pivoting through a third language can outperform direct translation (Cheng et al., 2016).",
"This approach requires enough source pivot and targetpivot parallel data, which is arguably hard in many low resource scenarios, such as Burmese, Indonesian, and Turkish.",
"Orthogonal to transfer learning, Lample et al. (2018) and Artetxe et al. (2018) have proposed a fully zero-shot approach for low resource languages that relies on aligning separately-trained word embeddings to induce an initial bilingual dictionary.",
"The dictionary is then used as the basis for a translation model.",
"However, these methods do not generalise to arbitrary language pairs (Sgaard et al., 2018).",
"Moreover, our setting presumes a small amount of parallel data in the low-resource pair.",
"We start with arguably the simplest form of transfer learning: train a parent model then switch to training with the child's dataset following Zoph et al. (2016).",
"We attempt to initialise the embedding vectors of the same tokens from the parent to the child.",
"We later investigate different approaches to transferring the embeddings.",
"As transfer learning requires a parent model, we start by sweeping different high-resource languages for the parent model to set a baseline.",
"Choosing a parent language pair is one of the first issues to solve when performing a transfer-learning experiment.",
"However, this is not a simple task.",
"Prior work (Dabre et al., 2017; Nguyen and Chiang, 2017) suggest that a related language is the best option, albeit related is not necessarily well defined.",
"Recently, Lin et al. (2019) performed a grid-search across various parent languages to determine the best criteria for selecting the optimal parent when performing transfer learning.",
"Their work showed that the best language parents might also be determined by external factors such as the corpus size, on top of the language relatedness.",
"According to the BLEU score, the difference between various parents is usually not that significant.",
"We first explore four potential parents: German and Russian from/to English.",
"From each of them, we transfer the parameters to our low-resource language pair of { Burmese, Indonesian, Turkish } to English.",
"Before presenting the results, we lay out the experimental setup used for the rest of the paper.",
"We use German-English and Russian-English datasets for our parent models.",
"Our German-English dataset is taken from the WMT17 news translation task (Bojar et al., 2017).",
"Our Russian-English is taken from the WMT18 task (Bojar et al., 2018).",
"For both pairs, we preprocess the input with byte-pair encoding (Sennrich et al., 2016b).",
"BurmeseEnglish: For our My En parallel data, we used 18k parallel sentences from the Asian Language Treebank (ALT) Project (Ding et al., 2018, 2019) collected from news articles.",
"IndonesianEnglish: Id En parallel data consists of 22k news-related sentences, which are taken from the PAN Localization BPPT corpus.",
"1 This dataset does not have a test/validation split.",
"Hence we randomly sample 2000 sentences to use as test and validation sets.",
"We augment our data by back-translating (Sennrich et al., 2016a) News Crawl from 2015.",
"Our total training set (including the back-translated sentences) consists of 88k pairs of sentences.",
"TurkishEnglish: Tr En data comes from the WMT17 news translation task (Bojar et al., 2017).",
"This data consists of 207k pairs of sentences.",
"Similar to Id En, we add a back-translation corpus from News Crawl 2015.",
"Our total training data consists of 415k sentence pairs.",
"For all language pairs, we use byte-pair encoding (Sennrich et al., 2016b) to tokenise words into subword units.",
"We use a standard transformer-base architecture with six encoder and six decoder layers for all experiments with the default hyper-parameters (Vaswani et al., 2017).",
"Training and decoding use Marian (Junczys-Dowmunt et al., 2018), while evaluation uses SacreBLEU (Post, 2018).",
"Our results on Table 1 show that there is no clear evidence that one parent is better than another.",
"Whether the non-English languages share a script or English is on the same side does not have a consistent impact.",
"The main goal of this section was to set appropriate baselines; we primarily use English German and German English as the parents.",
"Parent and child languages have a different vocabulary, so embeddings are not inherently transferable.",
"We investigate what is transferred in the embeddings and evaluate several vocabulary combination methods.",
"We first explore whether the embedding matrix contains any transferable information.",
"We divide the model into embedding parameters and everything else: inner layers.",
"Table 2 shows what happens when these parts are or are not transferred.",
"Our low-resource languages achieve better BLEU even if we only transfer the inner layers.",
"In contrast, only transferring the embeddings is not beneficial, and sometimes it is even harmful to the performance.",
"Finally, transferring all layers yields the best performance.",
"To further investigate which part of the network is more crucial to transfer, we took the best-performing child then reset either the embeddings or inner layers and restarted training.",
"We explore whether the model is capable of recovering the same or comparable quality by retraining.",
"We can look at this experiment as self' transfer learning.",
"Results are shown in Table 3.",
"When the inner layers are reset, self-transfer performs poorly (close to the quality without transfer learning at all), even though the embeddings are properly transferred.",
"Conversely, the models can somewhat restore their quality even if we reset the embedding layer.",
"This result further verifies that transferring the inner layers is the most critical aspect of transfer learning.",
"We conclude that transferring the inner layers is critical to performance, with far more impact than transferring the embeddings.",
"However, the embedding matrix has transferable information, as long as the inner layers are included.",
"Mixed recommendations exist on how to transfer embeddings between languages with different vocabularies.",
"We compare methods from previous work, namely random assignment (Zoph et al., 2016) and joint vocabularies (Nguyen and Chiang, 2017) with two additional embedding assignment strategies based on the frequency and token matching as a comparison.",
"In detail, we explore: Exclude Embedding: We do not transfer the embeddings at all.",
"As such, we show that transfer learning works without transferring the embedding layer.",
"In the present experiment, this method acts as one of the baselines.",
"Frequency Assignment: We can transfer the embedding information regardless of the vocabulary mismatch.",
"However, the toolkit sorts the words based on their frequency; therefore, embeddings are also transferred in that particular order.",
"Regardless, we can determine whether word frequency information is transferred.",
"Random Assignment: Zoph et al. (2016) suggest that randomly assigning a parent word embedding to each child word is sufficient, relying on the model to untangle the permutation.",
"This approach is simple and language-agnostic, thus universally applicable.",
"We shuf-fle the vocabulary to achieve a random assignment.",
"Joint Vocabulary: Nguyen and Chiang (2017) suggest that it is better to use a shared vocabulary between the parent and child language.",
"This can be obtained by training a joint BPE token.",
"To achieve this, we transfer the word embedding information of the common tokens.",
"Since tied embeddings are used, we share the same vocabulary between the target and source of both the parent and the child language.",
"One drawback of this technique is that we must prepare the vocabulary in advance.",
"Therefore, switching the parent or the child might require us to re-train the model.",
"Token Matching: We assign the embeddings with the same token first and randomise the rest.",
"This approach is designed to allow some word embeddings to be transferred correctly without the need to re-train the parent with every experiment, as in the case of joint vocabulary.",
"The different strategies are illustrated in Figure 1.",
"Prior experiments in Section 4.1 demonstrate that we can apply transfer learning even if we only transfer the inner layers.",
"Curiously, random assignment and frequency assignment are not better than excluding the embeddings, except for Burmese to a b c d x y a b",
"(e) Joint vocab Figure 1: Illustration of various strategies on how to transfer the embedding vector.",
"English transferred from English to German.",
"Therefore, the information in the embedding is lost when transferred to the incorrect token.",
"From these results, we conclude that the model is incapable of untangling the embedding permutation as stated by Zoph et al. (2016).",
"Transfer learning yields better results when we attempt to transfer the embeddings to the correct tokens.",
"In the joint vocabulary setting, not every token is observed in the parent language dataset; therefore, only a section of the embedding layer is correctly trained.",
"However, we still observe a significant improvement over the random and frequency-based assignment.",
"We can also transfer the embedding vectors by matching and assigning the word embedding with the same tokens.",
"Vocab matching achieves comparable results to joint vocabulary, except for the lowest-resource language, Burmese.",
"Therefore, this simple matching can be used as a cheaper alternative over a joint vocabulary.",
"On top of that, this approach is more efficient as we do not transfer and wastefully reserve extra memory for tokens that will not be seen in the child language.",
"These results suggest that word information stored in the embedding layer is transferable, as long as the vectors are assigned correctly.",
"Therefore, better ways of handling the embedding layer transfer are joint BPE and token matching, as they further improve the performance of the child language pair.",
"To understand what information is being transferred with transfer learning, we test the parent model's performance on the child language without any additional training.",
"When a pre-trained model is transferred to another language pair, the model has not yet seen the child language vocabulary.",
"When presented with an input in a new language, the model is unable to translate correctly.",
"However, as we can see in Table 5, the model manages to perform diagonal alignment properly, albeit it is mostly copying the input (on average of 75% of the time).",
"Based on this observation, we see that fallback copying behaviour, including monotonic alignment, is transferred.",
"This can be useful for named entity translation (Currey et al., 2017).",
"To test our claim, we prepare parents that implicitly learn to copy or transform input tokens diagonally.",
"We can create a copy sequence model (or auto-encoder) model by giving the model the same sentences for both source and target.",
"We pick an English monolingual dataset.",
"We also use a Chinese monolingual corpus to explore whether the chosen Parent Shared Example En De Id En src: Bank Mandiri bisa masuk dari mikro hingga korporasi .",
"monolingual language matters.",
"Besides, we can ar-tificially create a random sequence for the training set.",
"The random sequence is useful to determine whether any language-specific information is being transferred, as such information is absent in a random sequence.",
"To simulate the translation behaviour better, we also prepare a substitution parallel corpus.",
"We transform every token into another based on a predetermined 1:1 mapping.",
"We create a substitution corpus for both the English and the synthetic corpus.",
"With tied embeddings, the substitution corpus should help the model translate one token into another, instead of just copying.",
"Table 6 illustrates the 6 monolingual/synthetic parents that we use for this experiment.",
"We perform transfer learning experiments from every monolingual and synthetic parent to all three child languages, as summarised in Table 7.",
"For comparison, we also provide the result of transfer learning with an actual translation model as a parent.",
"We notice that there is no improvement in transfer learning for the Turkish model in terms of the final BLEU.",
"However, upon further investigation, transfer learning has an impact on the convergence speed, thus signalling information being transferred.",
"To measure this, we capture the validation BLEU score for Tr En after 10k training steps.",
"In general, transferring from any monolingual or synthetic parent yields better BLEU (or faster convergence for Turkish) compared to training from scratch.",
"Although, the improvement is suboptimal when compared with transfer learning from a proper parent.",
"However, we can use these gains to measure the information transferred in transfer learning.",
"In general using monolingual English is better than using monolingual Chinese.",
"In monolingual English, we can transfer the embedding information correctly with token matching.",
"Therefore, consistent with our previous experiment, embedding information is transferred.",
"Using a Chinese parent is better than using random sequences.",
"Our random sequence is uniformly sampled independently for each token.",
"Therefore, unlike a real monolingual corpus, learning language modelling from this random sequence is impossible.",
"Thus, we conclude that the model transfers some statistical properties of natural languages.",
"Transferring from a random sequence copy model yields better result compared to training the model from scratch.",
"While the improvement is minimal, we can see that a nave model that performs copying is better as a model initialisation.",
"Moreover, substitution sequence parent models perform better than their copying counterparts.",
"We suspect that copy models with tied embeddings converge to a local optimum that is a poorer initialisation for other translation models, compared to the substitution models.",
"Transfer learning with an actual NMT system as a parent still outperforms the monolingual and synthetic parents, albeit they are initially a copy model.",
"We argue that the monolingual parents perform nearly perfectly at the copying task, and have perfect diagonal alignment, and therefore overfit to this artificial setting when used as a parent.",
"Transfer learning can be used to initialise a model even if final quality does not change.",
"Compared to random initialisation, we argue that a pre-trained model functions as better initialisation.",
"Therefore, since we initialise the model better, it should converge faster.",
"This behaviour was already presented in Table 7, where the transferred model converges more rapidly.",
"However, we should explore this behaviour in a setting where faster training matters more: when training high-resource language pairs.",
"For this experiment, we take an English-to-Russian model as a parent for an English-to-German model.",
"We align the embedding with the same BPE tokens instead of using a joint vocabulary since this would require re-training the parent.",
"We also attempt to exclude the embedding completely.",
"These choices are practical in a real-world scenario, especially when we measure for efficiency.",
"In Table 8, we show that transfer learning does not improve the model's final quality.",
"However, we can see both from the Table, and visually in Figure 2, that transfer learning speeds up the convergence by up to 1.4x, assuming the parent model has been prepared before.",
"In the early stage of training, the gradients produced are quite noisy, which is particularly harmful to the transformer model (Popel and Bojar, 2018).",
"Therefore, training transformer models usually require a precise warm-up setup.",
"However, transfer Parent BLEU Num.",
"learning can be used as a better initialisation, thus skipping the noisy early training.",
"To further con-firm this, we remove the learning rate warm-up to observe the impact of a pre-trained model.",
"As shown in Figure 2, the pre-trained model remains capable of learning under more aggressive hyperparameters.",
"On the other hand, the model without pre-training fails to learn.",
"This result is congruent with the findings of Platanios et al. (2019), who found that warm-up in the Transformer can be removed with curriculum learning.",
"We demonstrate that the internal layers of the network are the most crucial for cross-lingual transfer learning.",
"The embeddings contain transferable information, as long as the vectors are mapped correctly and the inner layers are also transferred.",
"While not as optimal, we can still perform transfer learning by excluding the embedding.",
"In transfer learning, we can also transfer the alignment.",
"Transferred parents without fine-tuning will align 0 5 10 15 20 25 30 num updates x1000 0 10 20 30 v a li d a ti on BLEU Convergence per-update Baseline Baseline + No warm-up En-En Substitiution En-Ru En-Ru + Token matching En-Ru + Token matching + No warm-up Figure 2: Transfer learning effect on the convergence of a high-resource system.",
"the input diagonally and copy most of the tokens.",
"We further demonstrate that transfer learning still functions with a simple copy model, even with an artificial datasetalbeit with a reduced quality.",
"From a theoretical perspective, our results indicate that while transfer learning is effective in our scenario, it performed less transfer than previously thought.",
"Therefore, a promising research direction to investigate would involve the development and assessment of improved initialisation methods that would more efficiently yield the ben-efits of the model transfer.",
"From a practical perspective, our results indicate that we can initialise models with a pre-trained model regardless of the parent language or vocabulary handling.",
"With this perspective in mind, we can use transfer learning as a better initialisation, resulting in the child model having more stable gradients from the onset of training.",
"Therefore, models can train and converge faster, which is useful in high-resource settings.",
"With transfer learning, the model can be trained with more aggressive hy-perparameterssuch as removing the learning rate warm-up entirelyto further improve the convergence speed.",
"This result further highlights the use of transfer learning as a better model initialisation.",
"This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service ( http: //www.csd3.cam.ac.uk/ ), provided by Dell EMC and Intel using Tier-2 funding from the Engineering",
"and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council ( www.dirac.ac.uk ).",
"Alham Fikri Aji is funded by the Indonesia Endowment Fund for Education scholarship scheme.",
"Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727)."
] | [
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"other",
"abstain",
"objective",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"This paper proposes the problem of Deep Question Generation (DQG), which aims to generate complex questions that require reasoning over multiple pieces of information of the input passage.",
"In order to capture the global structure of the document and facilitate reasoning, we propose a novel framework which first constructs a semantic-level graph for the input document and then encodes the semantic graph by introducing an attention-based GGNN (Att-GGNN).",
"Afterwards, we fuse the document-level and graph-level representations to perform joint training of content selection and question decoding.",
"On the HotpotQA deep-question centric dataset, our model greatly improves performance over questions requiring reasoning over multiple facts, leading to state-of-the-art performance.",
"The code is publicly available at https://github.com/WING-NUS/ SG-Deep-Question-Generation .",
"Question Generation (QG) systems play a vital role in question answering (QA), dialogue system, and automated tutoring applications by enriching the training QA corpora, helping chatbots start conversations with intriguing questions, and automatically generating assessment questions, respectively.",
"Existing QG research has typically focused on generating factoid questions relevant to one fact obtainable from a single sentence (Duan et al., 2017; Zhao et al., 2018; Kim et al., 2019), as exemplified in Figure 1",
"a).",
"However, less explored has been the comprehension and reasoning aspects of questioning, resulting in questions that are shallow and not reflective of the true creative human process.",
"People have the ability to ask deep questions about events, evaluation, opinions, synthesis, or reasons, usually in the form of Why , Why-not , How , Input Paragraph A: Pago Pago International Airport Pago Pago International Airport, also known as TafunaAirport, is a public airport located 7 miles (11.3 km) southwest of the central business district of Pago Pago, in the village and plains of Tafuna on the island of Tutuila in American Samoa, an unincorporated territory of the United States .",
"What-if , which requires an in-depth understanding of the input source and the ability to reason over disjoint relevant contexts; e.g. , asking Why did Gollum betray his master Frodo Baggins?",
"after reading the fantasy novel The Lord of the Rings .",
"Learning to ask such deep questions has intrinsic research value concerning how human intelligence embodies the skills of curiosity and integration, and will have broad application in future intelligent systems.",
"Despite a clear push towards answering deep questions (exemplified by multi-hop reading comprehension (Cao et al., 2019) and commonsense QA (Rajani et al., 2019)), generating deep questions remains un-investigated.",
"There is thus a clear need to push QG research towards generating deep questions that demand higher cognitive skills.",
"In this paper, we propose the problem of D eep Q uestion G eneration (DQG), which aims to generate questions that require reasoning over multiple pieces of information in the passage.",
"Figure 1",
"b) shows an example of deep question which requires a comparative reasoning over two disjoint pieces of evidences.",
"DQG introduces three additional challenges that are not captured by traditional QG systems.",
"First, unlike generating questions from a single sentence, DQG requires document-level understanding, which may introduce long-range dependencies when the passage is long.",
"Second, we must be able to select relevant contexts to ask meaningful questions; this is non-trivial as it involves understanding the relation between disjoint pieces of information in the passage.",
"Third, we need to ensure correct reasoning over multiple pieces of information so that the generated question is answerable by information in the passage.",
"To facilitate the selection and reasoning over disjoint relevant contexts, we distill important information from the passage and organize them as a semantic graph , in which the nodes are extracted based on semantic role labeling or dependency parsing, and connected by different intraand inter-semantic relations (Figure 2).",
"Semantic relations provide important clues about what contents are question-worthy and what reasoning should be performed; e.g. , in Figure 1, both the entities Pago Pago International Airport and Hoonah Airport have the located at relation with a city in United States.",
"It is then natural to ask a comparative question: e.g. , Are Pago Pago International Airport and Hoonah Airport both on American territory?",
".",
"To efficiently leverage the semantic graph for DQG, we introduce three novel mechanisms: (1) proposing a novel graph encoder, which incorporates an attention mechanism into the Gated Graph Neural Network (GGNN) (Li et al., 2016), to dynamically model the interactions between different semantic relations; (2) enhancing the word-level passage embeddings and the node-level semantic graph representations to obtain an unified semantic-aware passage representations for question decoding; and (3) introducing an auxiliary content selection task that jointly trains with question decoding, which assists the model in selecting relevant contexts in the semantic graph to form a proper reasoning chain.",
"We evaluate our model on HotpotQA (Yang et al., 2018), a challenging dataset in which the questions are generated by reasoning over text from separate Wikipedia pages.",
"Experimental results show that our model incorporating both the use of the semantic graph and the content selection task improves performance by a large margin, in terms of both automated metrics (Section 4.3) and human evaluation (Section 4.5).",
"Error analysis (Section 4.6) validates that our use of the semantic graph greatly reduces the amount of semantic errors in generated questions.",
"In summary, our contributions are: (1) the very first work, to the best of our knowledge, to investigate deep question generation, (2) a novel framework which combines a semantic graph with the input passage to generate deep questions, and (3) a novel graph encoder that incorporates attention into a GGNN approach.",
"Question generation aims to automatically generate questions from textual inputs.",
"Rule-based techniques for QG usually rely on manually-designed rules or templates to transform a piece of given text to questions (Heilman, 2011; Chali and Hasan, 2012).",
"These methods are confined to a variety of transformation rules or templates, making the approach difficult to generalize.",
"Neural-based approaches take advantage of the sequence-to-sequence (Seq2Seq) framework with attention (Bahdanau et al., 2014).",
"These models are trained in an end-to-end manner, requiring far less labor and enabling better language flexibility, compared against rule-based methods.",
"A comprehensive survey of QG can be found in Pan et al. (2019).",
"Many improvements have been proposed since the first Seq2Seq model of Du et al. (2017): applying various techniques to encode the answer information, thus allowing for better quality answer-focused questions (Zhou et al., 2017; Sun et al., 2018; Kim et al., 2019); improving the training via combining supervised and reinforcement learning to maximize question-specific rewards (Yuan et al., 2017); and incorporating various linguistic features into the QG process (Liu et al., 2019a).",
"However, these approaches only consider sentence-level QG.",
"In contrast, our work focus on the challenge of generating deep questions with multi-hop reasoning over document-level contexts.",
"Recently, work has started to leverage paragraph-level contexts to produce better questions.",
"Du and Cardie (2018) incorporated coreference knowledge to better encode entity connections across documents.",
"Zhao et al. (2018) applied a gated self-attention mechanism to encode contextual information.",
"However, in practice, semantic structure is difficult to distil solely via self-attention over the entire document.",
"Moreover, despite considering longer contexts, these works are trained and evaluated on SQuAD (Rajpurkar et al., 2016), which we argue as insufficient to evaluate deep QG because more than 80% of its questions are shallow and only relevant to information confined to a single sentence (Du et al., 2017).",
"Given the document D and the answer A , the objective is to generate a question Q that satisfies:",
"where document D and answer A are both sequences of words.",
"Different from previous works, we aim to generate a Q which involves reasoning over multiple evidence sentences E = { s i } ni =1 , where s i is a sentence in D .",
"Also, unlike traditional settings, A may not be a sub-span of D because reasoning is involved to obtain the answer.",
"We propose an encoderdecoder framework with two novel features specific to DQG: (1) a fused word-level document and node-level semantic graph representation to better utilize and aggregate the semantic information among the relevant disjoint document contexts, and (2) joint training over the question decoding and content selection tasks to improve selection and reasoning over relevant information.",
"Figure 2 shows the general architecture of the proposed model, including three modules: semantic graph construction , which builds the DP-or SRL-based semantic graph for the given input; semantic-enriched document representation , employing a novel Attention-enhanced Gated Graph Neural Network (Att-GGNN) to learn the semantic graph representations, which are then fused with the input document to obtain graph-enhanced document representations; and joint-task question generation , which generates deep questions via joint training of node-level content selection and word-level question decoding.",
"In the following, we describe the details of each module.",
"As illustrated in the introduction, the semantic relations between entities serve as strong clues in determining what to ask about and the reasoning types it includes.",
"To distill such semantic information in the document, we explore both SRL-(Semantic Role Labelling) and DP(Dependency Parsing) based methods to construct the semantic graph.",
"Refer to Appendix A for the details of graph construction.",
"SRL-based Semantic Graph.",
"The task of Semantic Role Labeling (SRL) is to identify what semantic relations hold among a predicate and its associated participants and properties (M`arquez et al., 2008), including who did what to whom, etc.",
"For each sentence, we extract predicate-argument tuples via SRL toolkits 1 .",
"Each tuple forms a subgraph where each tuple element ( e.g. , arguments, location, and temporal) is a node.",
"We add inter-tuple edges between nodes from different tuples if they have an inclusive relationship or potentially mention the same entity.",
"1 We employ the state-of-the-art BERT-based model (Shi and Lin, 2019) in the AllenNLP toolkit to perform SRL.",
"DP-based Semantic Graph.",
"We employ the biaffine attention model (Dozat and Manning, 2017) for each sentence to obtain its dependency parse tree, which is further revised by removing unimportant constituents ( e.g. , punctuation) and merging consecutive nodes that form a complete semantic unit.",
"Afterwards, we add inter-tree edges between similar nodes from different parse trees to construct a connected semantic graph.",
"The left side of Figure 2 shows an example of the DP-based semantic graph.",
"Compared with SRL-based graphs, DP-based ones typically model more fine-grained and sparse semantic relations, as discussed in Appendix A.3.",
"Section 4.3 gives a performance comparison on these two formalisms.",
"We separately encode the document D and the semantic graph G via an RNN-based passage encoder and a novel Att-GGNN graph encoder, respectively, then fuse them to obtain the semantic-enriched document representations for question generation.",
"Document Encoding.",
"Given the input document D = [ w 1 , , w l ] , we employ the bi-directional Gated Recurrent Unit (GRU) (Cho et al., 2014) to encode its contexts.",
"We represent the encoder hidden states as XD = [ x 1 , , x l ] , where x i = [ (cid:126) x i ; (cid:126) x i ] is the context embedding of w i as a concatenation of its bi-directional hidden states.",
"Node Initialization.",
"We define the SRLand DP-based semantic graphs in an unified way.",
"The semantic graph of the document D is a heterogeneous multi-relation graph G = ( V , E ) , where V = { v i } i =1: N v and E = { e k } k =1: N e denote graph nodes and the edges connecting them, where N v and N e are the numbers of nodes and edges in the graph, respectively.",
"Each node v = { w j } n v j = m v is a text span in D with an associated node type t v , where m v / n v is the starting / ending position of the text span.",
"Each edge also has a type t e that represents the semantic relation between nodes.",
"We obtain the initial representation h 0 v for each node v = { w j } n v j = m v by computing the word-to-node attention.",
"First, we concatenate the last hidden states of the document encoder in both directions as the document representation d D = [ (cid:126) x l ; (cid:126) x 1 ] .",
"Afterwards, for a node v , we calculate the attention distribution of d D over all the words { w m v , , w j , , w n v } in v as follows: vj = exp( Attn ( d D , x j )) (cid:80) n v k = m n exp( Attn ( d D , x k )) (2) where vj is the attention coefficient of the document embedding d D over a word w j in the node v .",
"The initial node representation h 0 v is then given by the attention-weighed sum of the embeddings of its constituent words, i.e. , h 0 v = (cid:80) n v j = m v vj x j .",
"Word-to-node attention ensures each node to capture not only the meaning of its constituting part but also the semantics of the entire document.",
"The node representation is then enhanced with two additional features: the POS embedding p v and the answer tag embedding a v to obtain the enhanced initial node representations h 0 v = [ h 0 v ; p v ; a v ] .",
"Graph Encoding.",
"We then employ a novel Att-GGNN to update the node representations by aggregating information from their neighbors.",
"To represent multiple relations in the edge, we base our model on the multi-relation Gated Graph Neural Network (GGNN) (Li et al., 2016), which provides a separate transformation matrix for each edge type.",
"For DQG, it is essential for each node to pay attention to different neighboring nodes when performing different types of reasoning.",
"To this end, we adopt the idea of Graph Attention Networks (Velickovic et al., 2017) to dynamically determine the weights of neighboring nodes in message passing using an attention mechanism.",
"Formally, given the initial hidden states of graph H 0 = { h 0 i }| v i V , Att-GGNN conducts K layers of state transitions, leading to a sequence of graph hidden states H 0 , H 1 , , HK , where H k = { h ( k ) j }| v j V .",
"At each state transition, an aggregation function is applied to each node v i to collect messages from the nodes directly connected to v i .",
"The neighbors are distinguished by their incoming and outgoing edges as follows: h ( k ) N (cid:96) ( i ) = (cid:88) v j N (cid:96) ( i ) ( k ) ij W t eij h ( k ) j (3) h ( k ) N (cid:97) ( i ) = (cid:88) v j N (cid:97) ( i ) ( k ) ij W t eji h ( k ) j (4) where N (cid:97) ( i ) and N (cid:96) ( i ) denote the sets of incoming and outgoing edges of v i , respectively.",
"W t eij denotes the weight matrix corresponding to the edge type t e ij from v i to v j , and ( k ) ij is the attention coefficient of v i over v j , derived as follows: ( k ) ij = exp ( Attn ( h ( k ) i , h ( k ) j )) (cid:80) t N ( i ) exp( Attn ( h ( k ) i , h ( k ) t )) (5) where Attn ( , ) is a single-layer neural network implemented as a T [ WA h ( k ) i ; WA h ( k ) j ] , here a and WA are learnable parameters.",
"Finally, an GRU is used to update the node state by incorporating the aggregated neighboring information.",
"After the K -th state transition, we denote the final structure-aware representation of node v as h Kv .",
"Feature Aggregation.",
"Finally, we fuse the semantic graph representations HK with the document representations XD to obtain the semantic-enriched document representations ED for question decoding, as follows: ED = Fuse ( XD , HK ) (7) We employ a simple matching-based strategy for the feature fusion function Fuse.",
"For a word w i D , we match it to the smallest granularity node that contains the word w i , denoted as v M ( i ) .",
"We then concatenate the word representation x i with the node representation h Kv M ( i ) , i.e. , e i = [ x i ; h Kv M ( i ) ] .",
"When there is no corresponding node v M ( i ) , we concatenate x i with a special vector close to (cid:126) 0 .",
"The semantic-enriched representation ED provides the following important information to ben-efit question generation: (1) semantic information : the document incorporates semantic information explicitly through concatenating with semantic graph encoding; (2) phrase information : a phrase is often represented as a single node in the semantic graph ( cf Figure 2 as an example); therefore its constituting words are aligned with the same node representation; (3) keyword information : a word ( e.g. , a preposition) not appearing in the semantic graph is aligned with the special node vector mentioned before, indicating the word does not carry important information.",
"Based on the semantic-rich input representations, we generate questions via jointly training on two tasks: Question Decoding and Content Selection .",
"Question Decoding.",
"We adopt an attention-based GRU model (Bahdanau et al., 2014) with copying (Gu et al., 2016; See et al., 2017) and coverage mechanisms (Tu et al., 2016) as the question decoder.",
"The decoder takes the semantic-enriched representations ED = { e i , w i D} from the encoders as the attention memory to generate the output sequence one word at a time.",
"To make the decoder aware of the answer, we use the average word embeddings in the answer to initialize the decoder hidden states.",
"At each decoding step t , the model learns to attend over the input representations ED and compute a context vector c t based on ED and the current decoding state s t .",
"Next, the copying probability P cpy [0 , 1] is calculated from the context vector c t , the decoder state s t and the decoder input y t 1 .",
"P cpy is used as a soft switch to choose between generating from the vocabulary, or copying from the input document.",
"Finally, we incorporate the coverage mechanisms (Tu et al., 2016) to encourage the decoder to utilize diverse components of the input document.",
"Specifically, at each step, we maintain a coverage vector cov t , which is the sum of attention distributions over all previous decoder steps.",
"A coverage loss is computed to penalize repeatedly attending to the same locations of the input document.",
"Content Selection.",
"To raise a deep question, humans select and reason over relevant content.",
"To mimic this, we propose an auxiliary task of content selection to jointly train with question decoding.",
"We formulate this as a node classification task, i.e. , deciding whether each node should be involved in the process of asking, i.e. , appearing in the reasoning chain for raising a deep question, exemplified by the dark-colored nodes in Figure",
"2. To this end, we add one feed-forward layer on top of the final-layer of the graph encoder, taking the output node representations HK for classification.",
"We deem a node as positive ground-truth to train the content selection task if its contents appear in the ground-truth question or act as a bridge entity between two sentences.",
"Content selection helps the model to identify the question-worthy parts that form a proper reasoning chain in the semantic graph.",
"This synergizes with the question decoding task which focuses on the fluency of the generated question.",
"We jointly train these two tasks with weight sharing on the input representations.",
"To evaluate the model's ability to generate deep questions, we conduct experiments on HotpotQA (Yang et al., 2018), containing 100K crowd-sourced questions that require reasoning over separate Wikipedia articles.",
"Each question is paired with two supporting documents that contain the evidence necessary to infer the answer.",
"In the DQG task, we take the supporting documents along with the answer as inputs to generate the question.",
"However, state-of-the-art semantic parsing models have difficulty in producing accurate semantic graphs for very long documents.",
"We therefore pre-process the original dataset to select relevant sentences, i.e. , the evidence statements and the sentences that overlap with the ground-truth question, as the input document.",
"We follow the original data split of HotpotQA to pre-process the data, resulting in 90,440 / 6,072 examples for training and evaluation, respectively.",
"Following previous works, we employ BLEU 14 (Papineni et al., 2002), METEOR (Lavie and Agarwal, 2007), and ROUGE-L (Lin, 2004) as automated evaluation metrics.",
"BLEU measures the average n -gram overlap on a set of reference sentences.",
"Both METEOR and ROUGE-L specialize BLEU's n-gram overlap idea for machine translation and text summarization evaluation, respectively.",
"Critically, we also conduct human evaluation, where annotators evaluate the generation quality from three important aspects of deep questions: fluency, relevance, and complexity.",
"Seq2Seq + Attn (Bahdanau et al., 2014): the basic Seq2Seq model with attention, which takes the document as input to decode the question.",
"NQG++ (Zhou et al., 2017): which enhances the Seq2Seq model with a feature-rich encoder containing answer position, POS and NER information.",
"ASs2s (Kim et al., 2019): learns to decode questions from an answer-separated passage encoder together with a keyword-net based answer encoder.",
"S2sa-at-mp-gsa (Zhao et al., 2018): an enhanced Seq2Seq model incorporating gated self-attention and maxout-pointers to encode richer passage-level contexts (B4 in Table 1).",
"We also implement a version that uses coverage mechanism and our answer encoder for fair comparison, labeled B5.",
"CGC-QG (Liu et al., 2019a): another enhanced Seq2Seq model that performs word-level content selection before generation; i.e. , making decisions on which words to generate and to copy using rich syntactic features, such as NER, POS, and DEP.",
"Implementation Details.",
"For fair comparison, we use the original implementations of ASs2s and CGC-QG to apply them on HotpotQA.",
"All baselines share a 1-layer GRU document encoder and question decoder with hidden units of 512 dimensions.",
"Word embeddings are initialized with 300-dimensional pre-trained GloVe (Pennington et al., 2014).",
"For the graph encoder, the node embedding size is 256, plus the POS and answer tag embeddings with 32 -D for each.",
"The number of layers K is set to 3 and hidden state size is 256.",
"Other settings for training follow standard best practice 2 .",
"The top two parts of Table 1 show the experimental results comparing against all baseline methods.",
"We make three main observations:",
"1. The two versions of our model P1 and P2 consistently outperform all other baselines in BLEU.",
"Specifically, our model with DP-based semantic graph (P2) achieves an absolute improvement of 2.05 in BLEU-4 ( +15 . 2% ), compared to the document-level QG model which employs gated self-attention and has been enhanced with the same decoder as ours (B5).",
"This shows the significant effect of semantic-enriched document representations, equipped with auxiliary content selection for generating deep questions.",
"2. The results of CGC-QG (B6) exhibits an unusual pattern compared with other methods, achieving the best METEOR and ROUGE-L but worst BLEU-1 among all baselines.",
"As CGC-QG performs word-level content selection, we observe that it tends to include many irrelevant words in the question, leading to lengthy questions ( 33 . 7 tokens on average, while 17 . 7 for ground-truth questions and 19 . 3 for our model) that are unanswerable or with semantic errors.",
"Our model greatly reduces the error with node-level content selection based on semantic relations (shown in Table 3).",
"2 All models are trained using Adam (Kingma and Ba, 2015) with mini-batch size 32 .",
"The learning rate is initially set to 0 .",
"001 , and adaptive learning rate decay applied.",
"We adopt early stopping and the dropout rate is set to 0 .",
"3 for both encoder and decoder and 0 .",
"1 for all attention mechanisms.",
"3. While both SRL-based and DP-based semantic graph models (P1 and P2) achieve state-of-the-art BLEU, DP-based graph (P2) performs slightly better ( +3 . 3% in BLEU-4).",
"A possible explanation is that SRL fails to include fine-grained semantic information into the graph, as the parsing often results in nodes containing a long sequence of tokens.",
"We also perform ablation studies to assess the impact of different components on the model performance against our DP-based semantic graph (P2) model.",
"These are shown as Rows A14 in Table",
"1. Similar results are observed for the SRL-version.",
"Impact of semantic graph.",
"When we do not employ the semantic graph (A2, -w/o Semantic Graph), the BLEU-4 score of our model dramatically drops to 13 .",
"85 , which indicates the necessity of building semantic graphs to model semantic relations between relevant content for deep QG.",
"Despite its vital role, result of A1 shows that generating questions purely from the semantic graph is unsatisfactory.",
"We posit three reasons: 1) the semantic graph alone is insufficient to convey the meaning of the entire document, 2) sequential information in the passage is not captured by the graph, and that 3) the automatically built semantic graph inevitably contains much noise.",
"These reasons necessitate the composite document representation.",
"Impact of Att-GGNN.",
"Using a normal GGNN (A3, -w/o Multi-Relation & Attention) to encode the semantic graph, performance drops to 14.15 ( 3 . 61% ) in BLEU-4 compared to the model with Att-GGNN (A4, -w/o Multi-Task).",
"This reveals that different entity types and their semantic relations provide auxiliary information needed to generate meaningful questions.",
"Our Att-GGNN model (P2) incorporates attention into the normal GGNN, effectively leverages the information across multiple node and edge types.",
"Impact of joint training.",
"By turning off the content selection task (A4, -w/o Multi-Task), the BLEU-4 score drops from 15 .",
"53 to 14 .",
"66 , showing the contribution of joint training with the auxiliary task of content selection.",
"We further show that content selection helps to learn a QG-aware graph representation in Section 4.7, which trains the model to focus on the question-worthy content and form a correct reasoning chain in question decoding.",
"We conduct human evaluation on 300 random test samples consisting of: 100 short ( < 50 tokens), 100 medium (50-200 tokens), and 100 long ( > 200 tokens) documents.",
"We ask three workers to rate the 300 generated questions as well as the ground-truth Types Examples S2sa-at-CGC-QG DP-Graph mp-gsa Correct (Pred.) Between Kemess Mine and Colomac Mine, which mine was operated earlier?",
"questions between 1 (poor) and 5 (good) on three criteria: (1) Fluency , which indicates whether the question follows the grammar and accords with the correct logic; (2) Relevance , which indicates whether the question is answerable and relevant to the passage; (3) Complexity , which indicates whether the question involves reasoning over multiple sentences from the document.",
"We average the scores from raters on each question and report the performance over five top models from Table",
"1. Raters were unaware of the identity of the models in advance.",
"Table 2 shows our human evaluation results, which further validate that our model generates questions of better quality than the baselines.",
"Let us explain two observations in detail: Compared against B4 (S2sa-at-mp-gsa), improvements are more salient in terms of Fluency ( +13 . 33% ) and Complexity ( +8 . 48% ) than that of Relevance ( +6 . 27% ).",
"The reason is that the baseline produces more shallow questions (affect-ing complexity) or questions with semantic errors (affecting fluency).",
"We observe similar results when removing the semantic graph (A2. w/o Semantic Graph).",
"These demonstrate that our model, by incorporating the semantic graph, produces questions with fewer semantic errors and utilizes more context.",
"All metrics decrease in general when the input document becomes longer, with the most obvious drop in Fluency.",
"When input contexts is long, it becomes difficult for models to capture question-worthy points and conduct correct reasoning, leading to more semantic errors.",
"Our model tries to alleviate this problem by introducing semantic graph and content selection, but question quality drops as noise increases in the semantic graph when the document becomes longer.",
"In order to better understand the question generation quality, we manually check the sampled outputs, and list the 5 main error sources in Table",
"3. Among them, Semantic Error, Redun-dant, and Unanswerable are noticeable errors for all models.",
"However, we find that baselines have more unreasonable subjectpredicateobject collocations (semantic errors) than our model.",
"Especially, CGC-QG (B6) has the largest semantic error rate of 26 .",
"4% among the three methods; it tends to copy irrelevant contents from the input document.",
"Our model greatly reduces such semantic errors to 8 .",
"3% , as we explicitly model the semantic relations between entities by introducing typed semantic graphs.",
"The other noticeable error type is Unanswerable; i.e. , the question is correct itself but cannot be answered by the passage.",
"Again, CGC-QG remarkably produces more unanswerable questions than the other two models, and our model achieves comparable results with S2sa-at-mp-gsa (B4), likely due to the fact that answerability requires a deeper understanding of the document as well as commonsense knowledge.",
"These issues cannot be fully addressed by incorporating semantic relations.",
"Examples of questions generated by different models are shown in Figure",
"3. 4.7 Analysis of Content Selection We introduced the content selection task to guide the model to select relevant content and form proper reasoning chains in the semantic graph.",
"To quantitatively validate the relevant content selection, we calculate the alignment of node attention Last One the second studio album Na Na Confessions Confessions Robert Shapiro Matthew Hart a 2004 Americanteen musical comedy film by the Christian rock band Superchic[k].",
"Question(Ours) What is the name of the American teen musical comedy in which the second studio album by the Christian rock band Superchic[k].",
"Na Na appeared ?",
"Question(Humans) Which song by Last One Picked appeared in a 2004 American teen musical comedy film directed by Sara Sugarman ?",
"Question(Baseline) Who directed the 2004 American musical comedy Na in the film confessions Na ?",
"Question (CGC) Last One Picked is the second studio album by which 2004 American teen musical comedy film directed by Sara Sugarman and produced by Robert Shapiro and Matthew Hart for Walt Disney Pictures ?",
"v i with respect to the relevant nodes (cid:80) v i RN v i and irrelevant nodes (cid:80) v i / RN v i , respectively, under the conditions of both single training and joint training, where RN represents the ground-truth we set for content selection.",
"Ideally, a successful model should focus on relevant nodes and ignore irrelevant ones; this is reflected by the ratio between (cid:80) v i RN v i and (cid:80) v i / RN v i .",
"When jointly training with content selection, this ratio is 1 .",
"214 compared with 1 .",
"067 under single-task training, consistent with our intuition about content selection.",
"Ideally, a successful model should concentrate on parts of the graph that help to form proper reasoning.",
"To quantitatively validate this, we compare the concentration of attention in singleand multi-task settings by computing the entropy H = (cid:80) v i log v i of the attention distributions.",
"We find that content selection increases the entropy from 3.51 to 3.57 on average.",
"To gain better insight, in Figure 3, we visualize the semantic graph attention distribution of an example.",
"We see that the model pays more attention (is darker) to the nodes that form the reasoning chain (the highlighted paths in purple), consistent with the quantitative analysis.",
"We propose the problem of DQG to generate questions that requires reasoning over multiple disjoint pieces of information.",
"To this end, we propose a novel framework which incorporates semantic graphs to enhance the input document representations and generate questions by jointly training with the task of content selection.",
"Experiments on the HotpotQA dataset demonstrate that introducing semantic graph significantly reduces the semantic errors, and content selection benefits the selection and reasoning over disjoint relevant contents, leading to questions with better quality.",
"There are at least two potential future directions.",
"First, graph structure that can accurately represent the semantic meaning of the document is crucial for our model.",
"Although DP-based and SRL-based semantic parsing are widely used, more advanced semantic representations could also be explored, such as discourse structure representation (van No-ord et al., 2018; Liu et al., 2019b) and knowledge graph-enhanced text representations (Cao et al., 2017; Yang et al., 2019).",
"Second, our method can be improved by explicitly modeling the reasoning chains in generation of deep questions, inspired by related methods (Lin et al., 2018; Jiang and Bansal, 2019) in multi-hop question answering.",
"This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative.",
"Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore."
] | [
"objective",
"objective",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"result",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"other",
"other"
] |
[
"We explore the challenge of action prediction from textual descriptions of scenes, a testbed to approximate whether text inference can be used to predict upcoming actions.",
"As a case of study, we consider the world of the Harry Potter fantasy novels and inferring what spell will be cast next given a fragment of a story.",
"Spells act as keywords that abstract actions (e.g. Alohomora' to open a door) and denote a response to the environment.",
"This idea is used to automatically build HPAC , a corpus containing 82 836 samples and 85 actions.",
"We then evaluate different baselines.",
"Among the tested models, an LSTM -based approach obtains the best performance for frequent actions and large scene descriptions, but approaches such as logistic regression behave well on infrequent actions.",
"Natural language processing ( NLP ) has achieved significant advances in reading comprehension tasks (Chen et al., 2016; Salant and Berant, 2017).",
"These are partially due to embedding methods (Mikolov et al., 2013; Devlin et al., 2018) and neural networks (Rosenblatt, 1958; Hochreiter and Schmidhuber, 1997; Vaswani et al., 2017), but also to the availability of new resources and challenges.",
"For instance, in cloze-form tasks (Hermann et al., 2015; Bajgar et al., 2016), the goal is to predict the missing word given a short context.",
"Weston et al. (2015) presented baBI, a set of proxy tasks for reading comprenhension.",
"In the SQuAD corpus (Rajpurkar et al., 2016), the aim is to answer questions given a Wikipedia passage.",
"Kocisky et al. (2018) introduce NarrativeQA, where answering the questions requires to process entire stories.",
"In a related line, Frermann et al. (2017) use fictional crime scene investigation data, from the CSI series, to define a task where the models try to answer the question: who committed the crime?'.",
"In an alternative line of work, script induction (Schank and Abelson, 1977) has been also a useful approach to evaluate inference and semantic capabilities of NLP systems.",
"Here, a model processes a document to infer new sequences that re-flect events that are statistically probable (e.g. go to a restaurant, be seated, check the menu, . . . ).",
"For example, Chambers and Jurafsky (2008) introduce narrative event chains, a representation of structured knowledge of a set of events occurring around a protagonist.",
"They then propose a method to learn statistical scripts, and also introduce two different evaluation strategies.",
"With a related aim, Pichotta and Mooney (2014) propose a multi-event representation of statistical scripts to be able to consider multiple entities.",
"These same authors (Pichotta and Mooney, 2016) have also studied the abilities of recurrent neural networks for learning scripts, generating upcoming events given a raw sequence of tokens, using BLEU (Pap-ineni et al., 2002) for evaluation.",
"This paper explores instead a new task: action prediction from natural language descriptions of scenes.",
"The challenge is addressed as follows: given a natural language input sequence describing the scene, such as a piece of a story coming from a transcript, the goal is to infer which action is most likely to happen next.",
"Contribution We introduce a fictional-domain English corpus set in the world of Harry Potter novels.",
"The domain is motivated by the existence of a variety of spells in these literary books, associated with keywords that can be seen as unambiguous markers for actions that potentially relate to the previous context.",
"This is used to automatically create a natural language corpus coming from hundreds of users, with different styles, interests and writing skills.",
"We then train a number of standard baselines to predict upcoming actions, a task that requires to be aware of the context.",
"In particular, we test a number of generic models, from a simple logistic regression to neural models.",
"Experiments shed some light about their strengths and weaknesses and how these are related to the frequency of each action, the existence of other semantically related actions and the length of the input story.",
"To build an action prediction corpus, we need to: (1) consider the set of actions, and (2) collect data where these occur.",
"Data should come from different users, to approximate a real natural language task.",
"Also, it needs to be annotated, determining that a piece of text ends up triggering an action.",
"These tasks are however time consuming, as they require annotators to read vast amounts of large texts.",
"In this context, machine comprehension resources usually establish a compromise between their complexity and the costs of building them (Hermann et al., 2015; Kocisky et al., 2018).",
"We rely on an intuitive idea that uses transcripts from the Harry Potter world to build up a corpus for textual action prediction.",
"The domain has a set of desirable properties to evaluate reading comprehension systems, which we now review.",
"Harry Potter novels define a variety of spells.",
"These are keywords cast by witches and wizards to achieve purposes, such as turning on a light (Lu-mos'), unlocking a door (Alohomora') or killing (Avada Kedavra').",
"They abstract complex and non-ambiguous actions.",
"Their use also makes it possible to build an automatic and self-annotated corpus for action prediction.",
"The moment a spell occurs in a text represents a response to the environment, and hence, it can be used to label the preceding text fragment as a scene description that ends up triggering that action.",
"Table 1 illustrates it with some examples from the original books.",
"This makes it possible to consider texts from the magic world of Harry Potter as the domain for the action prediction corpus, and the spells as the set of eligible actions.",
"1 Determining the length of the preceding context, namely snippet , that has to be 1 Note that the corpus is built in an automatic way and some occurrences might not correspond to actions, but for example, to a description of the spell or even some false positive samples.",
"Related to this, we have not censored the content of the stories, so some of them might contain adult content.",
"considered as the scene description is however not trivial.",
"This paper considers experiments ( 4) using snippets with the 32, 64, 96 and 128 previous tokens to an action.",
"We provide the needed scripts to rebuild the corpus using arbitrary lengths.",
"2 2.2 Data crawling The number of occurrences of spells in the original Harry Potter books is small (432 occurrences), which makes it difficult to train and test a machine learning model.",
"However, the amount of available fan fiction for this saga allows to create a large corpus.",
"For HPAC , we used fan fiction (and only fan fiction texts) from https://www.",
"fanfiction.net/book/Harry-Potter/ and a version of the crawler by Milli and Bamman (2016).",
"3 We collected Harry Potter stories written in English and marked with the status com-pleted'.",
"From these we extracted a total of 82 836 spell occurrences, that we used to obtain the scene descriptions.",
"Table 2 details the statistics of the corpus (see also Appendix A).",
"Note that similar to Twitter corpora, fan fiction stories can be deleted over time by users or admins, causing losses in the dataset.",
"4 Preprocessing We tokenized the samples with (Manning et al., 2014) and merged the occurrences of multi-word spells into a single token.",
"This work addresses the task as a classification problem, and in particular as a sequence to label classification problem.",
"For this reason, we rely on standard models used for this type of task: multinomial logistic regression, a multi-layered perceptron, convolutional neural networks and long short-term memory networks.",
"We outline the essentials of each of these models, but will treat them as black boxes.",
"In a related line, Kaushik and Lipton (2018) discuss the need of providing rigorous baselines that help better understand the improvement coming from future and complex models, and also the need of not demanding architectural novelty when introducing new datasets.",
"https://github.com/aghie/hpac 3 Due to the website's Terms of Service, the corpus cannot be directly released.",
"4 They also can be modified, making it unfeasible to retrieve some of the samples.",
"Text fragment Action Ducking under Peeves, they ran for their lives, right to the end of the corridor where they slammed into a door and it was locked.",
"This is it!' Ron moaned, as they pushed helplessly at the door, We're done for! This is the end!' They could hear footsteps, Filch running as fast as he could toward Peeves's shouts.",
"Oh, move over', Hermione snarled.",
"She grabbed Harry's wand, tapped the lock, and whispered, Alohomora '.",
"Unlock the door And then, without warning, Harry's scar exploded with pain.",
"It was agony such as he had never felt in all his life; his wand slipped from his fingers as he put his hands over his face; his knees buckled; he was on the ground and he could see nothing at all; his head was about to split open.",
"From far away, above his head, he heard a high, cold voice say, Kill the spare.' A swishing noise and a second voice, which screeched the words to the night: Avada Kedavra ' Kill a target Harry felt himself being pushed hither and thither by people whose faces he could not see.",
"Then he heard Ron yell with pain.",
"What happened?' said Hermione anxiously, stopping so abruptly that Harry walked into her.",
"Ron, where are you? Oh, this is stupid' Lumos ' Turn on a light Table 1: Examples from the Harry Potter books showing how spells map to reactions to the environment.",
"special case of language modelling, where the output vocabulary is restricted to the size of the ac-tion' vocabulary.",
"Also, note that the performance for this task is not expected to achieve a perfect accuracy, as there may be situations where more than one action is reasonable, and also because writers tell a story playing with elements such as surprise or uncertainty.",
"The source code for the models can be found in the GitHub repository mentioned above.",
"Notation w 1: n denotes a sequence of words w 1 , ..., w n that represents the scene, with w i V .",
"F ( ) is a function parametrized by .",
"The task is cast as F : V n A , where A is the set of actions.",
"The input sentence w 1: n is encoded as a one-hot vector, v (total occurrence weighting scheme).",
"Multinomial Logistic Regression Let MLR ( v ) be an abstraction of a multinomial logistic regression parametrized by , the output for an input v is computed as the arg max a AP ( y = a | v ) , where P ( y = a | v ) is a softmax function, i.e, P ( y = a | v ) = e Wa v (cid:80) Aa (cid:48) e Wa (cid:48) v .",
"MultiLayer Perceptron We use one hidden layer with a rectifier activation function ( relu ( x ) = max (0 , x ) ).",
"The output is computed as MLP ( v ) = softmax ( W 2 relu ( W v + b )+ b 2 ) .",
"The input sequence is represented as a sequence of word embeddings, w 1: n , where w i is a concatenation of an internal embedding learned during the training process for the word w i , and a pre-trained embedding extracted from GloVe (Pen-nington et al., 2014) 5 , that is further fine-tuned.",
"Long short-term memory network (Hochre-iter and Schmidhuber, 1997): The output for an element w i also depends on the output of w i 1 .",
"The LSTM ( w 1: n ) 6 takes as input a sequence of word embeddings and produces a sequence of hidden outputs, h 1: n ( h i size set to 128).",
"The last output of the LSTM , h n , is fed to a MLP .",
"Convolutional Neural Network (LeCun et al., 1995; Kim, 2014).",
"It captures local properties over continuous slices of text by applying a convolution layer made of different filters.",
"We use a wide convolution, with a window slice size of length 3 and 250 different filters.",
"The convolutional layer uses a relu as the activation function.",
"The output is fed to a max pooling layer, whose output vector is passed again as input to a MLP .",
"Setup All MLP 's have 128 input neurons and 1 hidden layer.",
"We trained up to 15 epochs using mini-batches (size=16), Adam (lr= 0 . 001 ) (Kingma and Ba, 2015) and early stopping.",
"Table 3 shows the macro and weighted F-scores for the models considering different snippet sizes.",
"7 5 http://nlp.stanford.edu/data/glove.",
"6 n is set to be equal to the length of the snippet.",
"7 As we have addressed the task as a classification problem, we will use precision, recall and F-score as the evaluation metrics.",
"To diminish the impact of random seeds and local minima in neural networks, results are averaged across 5 runs.",
"8 Base' is a majority-class model that maps everything to Avada Kedavra', the most common action in the training set.",
"This helps test whether the models predict above chance performance.",
"When using short snippets (size=32), disparate models such as our MLR , MLP and LSTM s achieve a similar performance.",
"As the snippet size is increased, the LSTM -based approach shows a clear improvement on the weighted scores 9 , something that happens only marginally for the rest.",
"However, from Table 3 it is hard to find out what the approaches are actually learning to predict.",
"To shed some light, Table 4 shows their performance according to a ranking metric, recall at k .",
"The results show that the LSTM -based approach is the top performing model, but the MLP obtains just slightly worse results.",
"Recall at 1 is in both cases low, which suggests that the task is indeed complex and that using just LSTM s is not enough.",
"It is also possible to observe that even if the models have difficulties to correctly predict the action as a first option, they develop certain sense of the scene and consider the right one among their top choices.",
"Table 5 delves into this by splitting the performance of the model into infrequent and frequent actions (above the average, i.e. those that occur more than 98 times in the training set, a total of 20 actions).",
"There is a clear gap between 8 Some macro F-scores do not lie within the Precision and Recall due to this issue.",
"9 For each label, we compute their average, weighted by the number of true instances for each label.",
"The F-score might be not between precision and recall.",
"the performance on these two groups of actions, with a 50 points difference in recall at 5 .",
"Also, a simple logistic regression performs similar to the LSTM on the infrequent actions.",
"Error analysis 10 Some of the misclassifications made by the LSTM approach were semantically related actions and counter-actions.",
"For example, Colloportus' (to close a door) was never predicted.",
"The most common mis-classification (14 out of 41) was Alohomora' (to unlock a door), which was 5 times more frequent in the training corpus.",
"Similarly, Nox' (to extinguish the light from a wand) was correctly predicted 6 times, meanwhile 36 mis-classifications corre-10 Made over one of the runs from the LSTM -based approach and setting the snippet size to 128 tokens.",
"spond to Lumos' (to light a place using a wand), which was 6 times more frequent in the training set.",
"Other less frequent spells that denote vision and guidance actions, such as Point me' (the wand acts a a compass pointing North) and Homenum revelio' (to revel a human presence) were also mainly misclassified as Lumos'.",
"This is an indicator that the LSTM approach has difficulties to disambiguate among semantically related actions, especially if their occurrence was unbalanced in the training set.",
"This issue is in line with the tendency observed for recall at k .",
"Spells intended for much more specific purposes, according to the books, obtained a performance significantly higher than the average, e.g. F-score(Riddikulus')=63.54, F-score(Expecto Pa-tronum')=55.49 and F-score(Obliviate')=47.45.",
"As said before, the model is significantly biased towards frequent actions.",
"For 79 out of 84 gold actions in the test set, we found that the samples tagged with such actions were mainly classified into one of the top 20 most frequent actions.",
"Human comparison We collected human annotations from 208 scenes involving frequent actions.",
"The accuracy/F-macro/F-weighted was 39.20/30.00/40.90.",
"The LSTM approach obtained 41.26/25.37/39.86.",
"Overall, the LSTM approach obtained a similar performance, but the lower macro F-score by the LSTM could be an indicator that humans can distinguish within a wider spectrum of actions.",
"As a side note, super-human performance it is not strange in other NLP tasks, such as sentiment analysis (Pang et al., 2002).",
"We explored action prediction from written stories.",
"We first introduced a corpus set in the world of Harry Potter's literature.",
"Spells in these novels act as keywords that abstract actions.",
"This idea was used to label a collection of fan fiction.",
"We then evaluated standard NLP approaches, from logistic regression to sequential models such as LSTM s.",
"The latter performed better in general, although vanilla models achieved a higher performance for actions that occurred a few times in the training set.",
"An analysis over the output of the LSTM approach also revealed difficulties to discriminate among semantically related actions.",
"The challenge here proposed corresponded to a fictional domain.",
"A future line of work we are interested in is to test whether the knowledge learned with this dataset could be transferred to real-word actions (i.e. real-domain setups), or if such transfer is not possible and a model needs to be trained from scratch.",
"This work has received support from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01), and from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150)."
] | [
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other"
] |
[
"There is an increasing interest in studying natural language and computer code together, as large corpora of programming texts become readily available on the Internet.",
"For example, StackOverflow currently has over 15 million programming related questions written by 8.5 million users.",
"Meanwhile, there is still a lack of fundamental NLP techniques for identifying code tokens or software-related named entities that appear within natural language sentences.",
"In this paper, we introduce a new named entity recognition (NER) corpus for the computer programming domain, consisting of 15,372 sentences annotated with 20 fine-grained entity types.",
"We trained in-domain BERT representations (BERTOver-flow) on 152 million sentences from StackOverflow, which lead to an absolute increase of +10 F 1 score over off-the-shelf BERT.",
"We also present the SoftNER model which achieves an overall 79.10 F 1 score for code and named entity recognition on StackOverflow data.",
"Our SoftNER model incorporates a context-independent code token classifier with corpus-level features to improve the BERT-based tagging model.",
"1 1 Introduction Recently there has been significant interest in modeling human language together with computer code (Quirk et al., 2015; Iyer et al., 2016; Yin and Neubig, 2018), as more data becomes available on websites such as StackOverflow and GitHub.",
"This is an ambitious yet promising direction for scaling up language understanding to richer domains.",
"Access to domain-specific NLP tools could help a wide range of downstream applications.",
"For example, extracting software knowledge bases from 1 Our code and data are available at: https:// github.com/jeniyat/StackOverflowNER/ Figure 1: Examples of software-related named entities in a StackOverflow post.",
"text (Movshovitz-Attias and Cohen, 2015), developing better quality measurements of StackOverflow posts (Ravi et al., 2014), finding similar questions (Amirreza Shirani, 2019) and more.",
"However, there is a lack of NLP resources and techniques for identifying software-related named entities (e.g., variable names or application names) within natural language texts.",
"In this paper, we present a comprehensive study that investigates the unique challenges of named entity recognition in the social computer programming domain.",
"These named entities are often ambiguous and have implicit reliance on the accompanied code snippets.",
"For example, the word list ' commonly refers to a data structure, but can also be used as a variable name (Figure 1).",
"In order to recognize these entities, we propose a software-related named entity recognizer (Soft-NER) that utilizes an attention network to combine the local sentence-level context with corpus-level information extracted from the code snippets.",
"Using our newly annotated corpus of 15,372 sentences in StackOverflow, we rigorously test our proposed SoftNER model, which outperforms BiLSTM-CRF model and fine-tuned BERT model for identifying 20 types of software-related named entities.",
"Our key contributions are the following: A new StackOverflow NER corpus manually annotated with 20 types of named entities, including all in-line code within natural language sentences (2).",
"We demonstrate that NER in the software domain is an ideal benchmark task for testing effectiveness of contextual word representations, such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019), due to its inherent polysemy and salient reliance on context.",
"An in-domain trained neural SoftNER tagger for StackOveflow (3) that can recognize 20 fine-grained named entity types related to software developing.",
"We also tested its performance on GitHub data of readme files and issue reports.",
"A code token recognizer (3.1) that utilizes StackOveflow code snippets to capture the spelling patterns of code-related tokens, and consistently improves the NER tagger.",
"In-domain pretrained ELMo and BERT representations (3.3) on 152 million sentences from StackOverflow that significantly outperforms off-the-shelf ELMo and leads to more than 21 points increase in F 1 score over off-the-shelf BERT.",
"Overall, our named entity tagger (SoftNER) achieves a 79.10% F 1 score on StackOverflow and 61.08% F 1 score on GitHub data for extracting the 20 software related named entity types.",
"We believe this performance is sufficiently strong to be practically useful.",
"We have released our data and code, including the named entity tagger, our annotated corpus, annotation guideline, a specially designed tokenizer, and pre-trained StackOverflow BERT and ELMo embeddings.",
"In this section, we describe the construction of our StackOverflow NER corpus.",
"We randomly selected 1,237 question-answer threads from StackOverflow 10-year archive (from September 2008 to March 2018) and manually annotated them with 20 types of entities.",
"For each question, four answers were annotated, including the accepted answer, the most upvoted answer, as well as two randomly selected answers (if they exist).",
"Table 1 shows the statistics of our corpus.",
"40 % of the question-answer threads were double-annotated, which are used as the development and test sets in our experiments (4).",
"We also annotated 6,501 sentences from GitHub readme files and issue reports as additional evaluation data.",
"We defined and annotated 20 types of fine-grained entities, including 8 code-related entities and 12 natural language entities.",
"The code entities include mentions of CLASS , VARIABLE , INLINECODE , FUNCTION , LIBRARY , VALUE , DATATYPE , and HTML XML TAG .",
"Whereas the natural language entities include mentions of APPLICATION , UI ELEMENT , LANGUAGE , DATASTRUCTURE , ALGORITHM , FILETYPE , FILENAME , VERSION , DEVICE , OS, WEBSITE , and USERNAME .",
"Our annotation guideline was developed through several pilots and further updated with notes to resolve difficult cases as the annotation progressed.",
"2 Each entity type was defined to encourage maximum span length (e.g., SGML parser ' instead of SGML ').",
"We annotated noun phrases without including modifiers (e.g., C ' instead of Plain C '), except a few special cases (e.g., rich text ' as a common FILETYPE ).",
"On average, an entity contains about 1.5 tokens.",
"While VARIABLE , FUNCTION and CLASS names mostly consist of only a single token, our annotators found that some are written as multiple tokens when mentioned in natural language text (e.g., array list ' for ArrayList ' in Figure 1).",
"The annotators were asked to read relevant code blocks or software repositories to make a decision, if needed.",
"Annotators also searched Google or Wikipedia to categorize unfamiliar cases.",
"The annotators were asked to update, correct, or add annotations from the user provided (cid:104) code (cid:105) markdown tags.",
"StackOverflow users can utilize (cid:104) code (cid:105) markdowns to highlight the code entities 2 Our annotation guideline is available at: https:// github.com/jeniyat/StackOverflowNER/ .",
"within the natural language sentences.",
"However, in reality, many users do not enclose the code snippets within the (cid:104) code (cid:105) tags; and sometimes use them to highlight non-code elements, such as email addresses, user names, or natural language words.",
"While creating the StackOverflow NER corpurs, we found that 59.73% of code-related entities are not marked by the StackOverflow users.",
"Moreover, only 75.54% of the (cid:104) code (cid:105) enclosed texts are actually code-related, while 10.12% used to are highlighting natural language texts.",
"The rest of cases are referring to non-code entities, such as SOFTWARENAMES and VERSIONS .",
"While markdown tag could be a useful feature for entity segmentation (3.1.3), we emphasize the importance of having a human annotated corpus for training and evaluating NLP tools in the software domain.",
"Our corpus was annotated by four annotators who are college students majored in computer science.",
"We used a web-based annotation tool, BRAT (Stenetorp et al., 2012), and provided annotators with links to the original post on StackOverflow.",
"For every iteration, each annotator was given 50 question-answer threads to annotate, 20 of which were double-annotated.",
"An adjudicator then discussed disagreements with annotators, who also cross-checked the 30 single-annotated questions in each batch.",
"The inter-annotator agreement is 0.62 before adjudication, measured by span-level Cohen's Kappa (Cohen, 1960).",
"To better understand the domain adaptability of our work, we further annotated the readme files and issue reports from 143 randomly sampled repositories in the GitHub dump (Gousios and Spinellis, 2012) (from October 29, 2007 to December 31, 2017).",
"We removed all the code blocks from the issue reports and readme files collected from these 143 repositories.",
"The resulting GitHub NER dataset consists of 6,510 sentences and 10,963 entities of 20 types labeled by two in-house annotators.",
"The inter-annotator agreement of this dataset is 0.68, measured by span-level Cohen's Kappa.",
"We designed a new tokenizer, SOTOKENIZER specifically for the social computer programming domain.",
"StackOverflow and GitHub posts exhibit common features of web texts, including abbreviations, emoticons, URLs, ungrammatical sentences and spelling errors.",
"We found that tokenization is non-trivial as many code-related tokens are mistakenly split by the existing web-text tokenizers, including the CMU Twokenizer (Gimpel et al., 2011), Stanford TweetTokenizer (Manning et al., 2014), and NLTK Twitter Tokenizer (Bird et al., 2009): txScope.Complete() [ txScope' .' Complete' (' )' ] std::condition variable [ std' :' :' condition variable' ] math.h [ math' .' h' ] (cid:104) span (cid:105) [ (cid:104) ' span' (cid:105) ' ] a==b [ a' =' =' b' ] Therefore, we implemented a new tokenizer, using Twokenizer 3 as the starting point and added additional regular expression rules to avoid splitting code-related tokens.",
"The extraction of software-related named entities imposes significant challenges as it requires resolving a significant amount of unseen tokens, inherent polysemy, and salient reliance on context.",
"Unlike news or biomedical data, spelling patterns and long-distance dependencies are more crucial in the software domain to resolve ambiguities and categorize unseen words.",
"Taken in isolation, many tokens are highly ambiguous and can refer to either programming concepts or common English words, such as: go ', react ', spring ', while ', if ', select '.",
"To address these challenges, we design the SoftNER model that leverages sentential context to disambiguate and domain-specific character representations to handle rare words.",
"Figure 2 shows the architecture of our model, which consists of primarily three components: An input embedding layer (3.1) that extracts contextualized embeddings from the BERT base model and two new domain-specific embeddings for each word in the input sentence.",
"A embedding attention layer (3.2) that combines the three word embeddings using an attention network.",
"A linear-CRF layer that predicts the entity type of each word using the attentive word representations from the previous layer.",
"3 https://github.com/myleott/ ark-twokenize-py Figure 2: Our SoftNER model.",
"For each word in the input sentence, we extract in-domain BERT (Devlin et al., 2019) representations and two new domain-specific embeddings produced by",
"(i) a Code Recognizer , which represents if a word can be part of a code entity regardless of context; and",
"(ii) an Entity Segmenter , that predicts whether a word is part of any named entity in the given sentence.",
"Each domain-specific embedding is created by passing a binary value, predicted by a network independent from the SoftNER model.",
"We describe the two standalone auxiliary models that generate these domain-based vectors below.",
"Texts in the software engineering domain contain programming language tokens, such as variable names or code segments, interspersed with natural language words.",
"This makes input representations pre-trained on general book or Wikipedia texts unsuitable for software domain.",
"We pre-trained different in-domain word embeddings, including BERT (BERTOverflow), ELMo (ELMoVerflow), and GloVe (GloVerflow) vectors on the StackOverflow 10-year archive 4 of 152 million sentences and 2.3 billion tokens (3.3).",
"Humans with prior programming knowledge can easily recognize that list() ' is code, list ' can be either code or a common English word, whereas listing ' is more likely a non-code natural language token.",
"We thus introduce a code recognition module to capture such prior probability of how 4 https://archive.org/details/ stackexchange likely a word can be a code token without considering any contextual information.",
"It is worth noting that this standalone code recognition model is also useful for language-and-code research, such as retrieving code snippets based on natural language queries (Iyer et al., 2016; Giorgi and Bader, 2018; Yao et al., 2019) Our code recognition model ( Code Recognizer ) is a binary classifier.",
"It utilizes language model features and spelling patterns to predict whether a word is a code entity.",
"The input features include unigram word and 6-gram character probabilities from two language models (LMs) that are trained on the Gigaword corpus (Napoles et al., 2012) and all the code-snippets in the StackOverflow 10-year archive respectively.",
"We also pre-trained FastText (Joulin et al., 2016) word embeddings using these code-snippets, where a word vector is represented as a sum of its character ngrams.",
"We first transform each ngram probability into a k -dimensional vector using Gaussian binning (Maddela and Xu, 2018), which has shown to improve the performance of neural models using numeric features (Sil et al., 2017; Liu et al., 2016; Maddela and Xu, 2018).",
"We then feed the vec-torized features into a linear layer, concatenate the output with FastText character-level embeddings, and pass them through another hidden layer with sigmoid activation.",
"We predict the token as a code-entity if the output probability is greater than 0 .",
"5 .",
"This binary prediction is then converted into a vector and used as an input to the SoftNER model.",
"The segmentation task refers to identifying entity spans without assigning entity category.",
"Entity segmentation is simpler and less error-prone than entity recognition as it does not require a fine-grained classification of the entity types.",
"In fact, a segmentation model ( Entity Segmenter ) trained on our annotated StackOverflow corpus can achieve 90.41% precision on the dev set (de-tails in 4.5), predicting whether each token is a part of entity in the given sentence.",
"Our segmentation model fine-tunes the in-domain BERT after concatenating it with two hand-crafted features: Word Frequency represents the word occurrence count in the training set.",
"As many code tokens are defined by individual users, they occur much less frequently than normal English words.",
"In fact, code and non-code tokens have an average frequency of 1.47 and 7.41 respectively in our corpus.",
"Moreover, ambiguous token that can be either code or non-code entities, such as windows ', have a much higher average frequency of 92.57.",
"To leverage this observation, we include word frequency as a feature, converting the scalar value into a k -dimensional vector by Gaussian binning (Maddela and Xu, 2018).",
"Code Markdown indicates whether the given token appears inside a (cid:104) code (cid:105) markdown tag in the StackOverflow post.",
"It is worth noting that (cid:104) code (cid:105) tags are noisy as users do not always enclose inline code in a (cid:104) code (cid:105) tag or sometimes use the tag to highlight non-code texts (details in 2.1).",
"Nevertheless, we find it helpful to include the markdown information as a feature as it improves the performance of our segmentation model.",
"The inclusion of hand-crafted features is influ-enced by Wu et al. (2018), where word-shapes and POS tags were shown to improve the performance of sequence tagging models.",
"For each input word w i in the input sentence, we have three embeddings: BERT ( w i 1 ), Code Recognizer ( w i 2 ), and Entity Segmenter ( w i 3 ).",
"We introduce the embedding-level attention it ( t { 1 , 2 , 3 } ), which captures each embedding's contribution towards the meaning of the word, to combine them together.",
"To compute it , we pass the input embeddings through a bidirectional GRU and generate their corresponding hidden representations h it = GRU ( w it ) .",
"These vectors are then passed through a non-linear layer, which outputs u it = tanh ( W e h it + b e ) .",
"We introduce an embedding-level context vector u e , which is randomly initialized and updated during the training process.",
"This context vector is combined with the hidden embedding representation using a softmax function to extract weight of the embeddings: it = exp ( u itT u e ) (cid:80) t exp ( u itT u e ) .",
"Finally, we create the word vector by a weighted sum of all the information from different embeddings as word i = (cid:80) t it h it .",
"The aggregated word vector word i is then fed into a linear-CRF layer, which predicts the entity category for each word based the BIO tagging schema.",
"We use PyTorch framework to implement our proposed SoftNER model and its two auxiliary components, namely code recognition and entity segmentation.",
"The input to the SoftNER model include 850-dimensional vectors extracted from both the code recognizer and the entity segmenter.",
"We pre-trained BERT base , ELMo and GloVe vectors on 152 million sentences from the StackOverflow, excluding sentences from the 1,237 posts in our annotated corpus.",
"The pretraining of the 768-dimensional BERT base model with 64,000 WordPiece vocabulary took 7 days on a Google TPU.",
"The pre-training of 1024-dimensional ELMo vectors took 46 days on 3 NVIDIA Titan X Pascal GPUs.",
"The pre-training of 300-dimensional GloVe embeddings (Penning-ton et al., 2014) with a frequency cut-off of 5 took 8 hours on a server with 32 CPU cores and 386 GB memory.",
"We train the SoftNER model and the two auxiliary models separately.",
"Our segmentation model follows the simple BERT fine-tuning architecture except for the input, where BERT embeddings are concatenated with 100-dimensional code markdown and 10-dimensional word frequency features.",
"We set the number of bins k to 10 for Gaussian vectorization.",
"Our code recognition model is a feedforward network with two hidden layers and a single output node with sigmoid activation.",
"In this section, we show that our SoftNER model outperforms all the previous NER approaches on the StackOverflow and GitHub data.",
"We also discuss the factors pivotal to the performance of our model, namely pre-trained in-domain BERT embeddings and two domain-specific auxiliary tasks.",
"We train and evaluate our SoftNER model on the StackOverflow NER corpus of 9,352 train, 2,942 development and 3,115 test sentences we constructed in 2.",
"We use the same data for our segmentation model but replace all the entity tags with an I-ENTITY tag.",
"For the code recognition model, we created a new lexicon of 6,000 unique tokens randomly sampled from the training set of the StackOverflow NER corpus.",
"Each token was labelled independently without context as CODE , AMBIGUOUS or NON-CODE by two annotators with computer science background.",
"The inter-annotator agreement was 0.89, measured by Cohen's Kappa.",
"After discarding disagreements, we divided the remaining 5,312 tokens into 4,312 train and 1,000 test instances.",
"Then, we merged AMBIGUOUS and NON-CODE categories to facilitate binary classification.",
"We name this dataset of 5312 individual tokens as SOLEXICON .",
"We compare our model with the following baseline and state-of-the-art approaches:",
"A Feature-based Linear CRF model which uses the standard orthographic, context and gazetteer features, along with the code markdown tags and handcrafted regular expressions to recognize code entities (details in Appendix A).",
"A BiLSTM-CRF model with in-domain ELMo embeddings ( ELMoVerflow ; details in 3.3).",
"This architecture is used as the state-of-the-art baseline named-entity recognition models in various domains (Lample et al., 2016; Kulkarni et al., 2018; Dai et al., 2019).",
"An Attentive BiLSTM-CRF model with in-domain ELMo embeddings as well as domain-specific embeddings from the code recognizer and the entity segmenter.",
"This model combines these three word embeddings using an attention network and then utilizes a BiLSTM-CRF layer to predict the entity type of each input word (details in Appendix B).",
"A Fine-tuned out-of-domain BERT model where we fine-tune the original BERT base cased checkpoint 5 on our annotated corpus.",
"A Fine-tuned in-domain BERT model where we fine-tune the in-domain pre-trained BERT base ( BERTOverflow ; details in 3.3) cased checkpoint 6 on our annotated corpus.",
"Table 2 shows the precision ( P ), recall ( R ) and F 1 score comparison of different models evaluated on the StackOverflow NER corpus.",
"Our SoftNER model outperforms the existing NER approaches in all the three metrics.",
"Fine-tuning over in-domain trained BERT (BERTOverflow), in particular, improves F 1 score by more than 10 points in comparison to using the original BERT.",
"Table 3 shows the performance comparison between in-domain and out-of-domain word embeddings.",
"We consider off-the-shelf BERT (De-vlin et al., 2019), ELMo (Peters et al., 2018) and GloVe (Pennington et al., 2014) vectors trained on newswire and web texts as out-of-domain embeddings.",
"When using the BiLSTM-CRF model (Lample et al., 2016; Kulkarni et al., 2018; Dai et al., 2019), we observe a large increase of 13.64 F 1 score when employing in-domain ELMo (ELMoVerflow) representations over in-domain GloVe (GloVeOverflow), and an increase of 15.71 F 1 score over out-of-domain ELMo.",
"We found that fine-tuning out-of-domain BERT (De-vlin et al., 2019) outperforms the out-of-domain 5 https://github.com/google-research/ BERT 6 https://github.com/lanwuwei/ BERTOverflow/ P R F 1 out-of-domain Word Embeddings GloVe (newswire+Wiki+Web) 61.71 49.08 54.67 ELMo (newswire+Wiki) 67.66 47.41 55.75 Fine-tuned BERT (book+Wiki) 45.92 77.02 57.54 In-Domain Word Embeddings GloVeOverflow 66.28 51.28 57.82 ELMoVerflow 74.44 68.71 71.46 Fine-tuned BERTOverflow 72.11 70.51 71.30 Table 3: Performance of fine-tuned BERT model, BiLSTM-CRF model with GloVe and ELMo embeddings on the dev set of our StackOverflow NER corpus.",
"ELMo (Table 3), although it underperforms in-domain ELMo (ELMoVerflow) by 12.92 F 1 score and in-domain BERt (BERTOverflow) by 12.76 F 1 score (Table 2).",
"Similarly, in-domain ELMo outperforms the out-of-domain fine-tuned BERT by 10.67 F 1 score on Github data (Table 8; more details in 5).",
"It is worth noting that, the performance improvements from contextual word embeddings are more pronounced on our software domain than on newswire and biomedical domains.",
"Original ELMo and BERT outperform GloVe by 2.06 and 2.12 points in F 1 respectively on CoNLL 2003 NER task of newswire data (Peters et al., 2018; Devlin et al., 2019).",
"For biomedical domain, in-domain ELMo outperforms out-of-domain ELMo by only 1.33 points in F 1 on the BC2GM dataset (Sheikhshabbafghi et al., 2018).",
"We hypothesized that the performance gains from the in-domain contextual embeddings are largely aided by the model's ability to handle ambiguous and unseen tokens.",
"The increase in performance is especially notable (41% 70% accuracy) for unseen tokens, which constitute 38% of the tokens inside gold entity spans in our dataset.",
"This experiment also demonstrates that our annotated NER corpus provides an attractive test-bed for measuring the adaptability of different contextual word representations.",
"The domain-specific vectors produced by the Code Recognizer and the Entity Segmenter are also crucial for the overall performance of our SoftNER model.",
"Table 4 shows an ablation study.",
"Removing code recognizer vectors and entity segmenter vectors results in a drop of 2.19 and 3.69 in F 1 scores respectively.",
"If we replace embedding-level attention with a simple concatenation of em-P R F 1 SoftNER 78.81 81.72 80.24 Embedding Attention 75.83 79.09 77.43 Code Recognizer 78.76 77.35 78.05 Entity Segmenter 77.82 75.32 76.55 Table 4: Ablation study of SoftNER on the dev set of StackOverflow NER corpus.",
"FastText Embeddings 76.12 81.69 78.81 Table 5: Evaluation results and feature ablation of our code recognition model on SOLEXICON test set of 1000 manually labeled unique tokens, which are sampled from the train set of StackOverflow NER corpus.",
"beddings, the performance also drop by 2.81 F 1 .",
"In addition, we evaluate the effectiveness of our two domain-specific auxiliary systems on their respective tasks.",
"Code Recognition: Table 5 compares the performance of our code recognition model with other baselines on the SLEXICON test set (4.1), which consists of 1,000 random words from the train set of StackOverflow NER corpus classified as either a code or a non-code token.",
"The baselines include:",
"(i) a Most Frequent Label baseline, which assigns the most frequent label according to the human annotation in SOLEXICON train set; and",
"(ii) a frequency baseline, which learns a threshold over token frequency in the train set of StackOverflow NER corpus using a decision tree classifier.",
"Our model outperforms both baselines in terms of F 1 score.",
"Although the most frequent label baseline achieves better precision than our model, it performs poorly on unseen tokens resulting in a large drop in recall and F 1 score.",
"The ablation experiments show that the FastText word embeddings along with the character and word-level features are crucial for the code recognition model.",
"Entity Segmentation: Table 6 shows the performance of our segmentation model on the dev set of our StackOverflow corpus, where the entity tags are replaced by an I-ENTITY tag.",
"Our model achieves an F 1 score of 88.09 and with 90.41% precision and 85.89% recall.",
"Incorporating word frequency and code markdown feature increases the F 1 score by 1.57 and 2.66 points respectively.",
"The low 10.5 F 1 score of Stanford NERP R F 1 Stanford NER Tagger 63.02 5.74 10.52 Our Entity Segmentation Model 90.41 85.89 88.09 Word Frequency 88.32 84.79 86.52 Code Markdown 86.23 84.64 85.43 Table 6: Evaluation of our segmentation model on the dev set of the StackOverflow NER corpus.",
"tagger (Manning et al., 2014), which is trained on newswire text, demonstrates the importance of domain-specific tools for the software engineering domain.",
"Based on our manual inspection, the incorrect predictions made by NER systems on StackOverflow data can be largely classified into the following two categories (see examples in Table 7):",
"Segmentation Mismatch refers to the cases where model predicts the boundary of entities incorrectly.",
"Our SoftNER model reduces such segmentation errors by 89.36% compared to the fine-tuned BERTOverflow baseline.",
"Entity-Type Mismatch refers to the errors where a code entity (e.g., names of variables) is predicted as a non-code entity (e.g., names of devices), and vice-versa.",
"Our SoftNER model reduces such entity type errors by 13.54% compared to the fine-tuned BERTOverflow baseline.",
"As illustrated in Figure 3, our SoftNER model reduced the errors in both categories by incorporating the auxiliary outputs from segmenter and code recognizer model.",
"To understand the domain adaptability of our StackOverflow based SoftNER, we evaluate its performance on readme files and issue reports from 143 randomly sampled repositories in the GitHub dump (Gousios and Spinellis, 2012).",
"We also trained ELMo embeddings (ELMoGithub) on 4 million sentences from randomly sampled 5,000 GitHub repositories.",
"Table 8 shows that the performance of our SoftNER model using StackOverflow ELMo embeddings is similar to the top performing BiLSTM-CRF model using GitHub ELMo embeddings with a difference of only 1.61 points in F 1 .",
"We also did not observe any significant gain after adding Segmentation Mismatch Entity-Type Mismatch Table 7: Representative examples of system errors.",
"Fine-tuned BERTOverflow 61.71 58.75 60.19 SoftNER ( BERTOverflow ) 61.92 60.26 61.08 Table 8: Evaluation on the GitHub NER dataset of readme files and issue posts.",
"All the models are trained on our StackOverflow NER corpus.",
"Our SoftNER model performs close to BiLSTM-CRF model trained on the GitHub ELMo embeddings.",
"the code recognizer and segmenter vectors to the Github ELMo embeddings.",
"We think one likely explanation is that GitHub data contains less code-related tokens when compared to StackOverflow.",
"The percentage of code-related entity tokens is 63.20% in GitHub and 77.21% in StackOverflow.",
"Overall, we observe a drop of our SoftNER tagger from 79.10 F 1 on StackOverflow (Table 2) to 61.08 F 1 on GitHub data (Table 8) in F 1 due to domain mismatch.",
"However, we believe that our NER tagger still achieves sufficient performance to be useful for applications on GitHub.",
"7 We leave investigation of semi-supervised learning and other domain adaptation approaches for future work.",
"The CoNLL 2003 dataset (Sang and De Meul-der, 2003) is a widely used benchmark for named entity recognition, which contains annotated newswire text from the Reuters RCV1 corpus.",
"State-of-the-art approaches on this dataset (Baevski et al., 2019) use a bidirectional LSTM (Lample et al., 2016; Ma and Hovy, 2016) with conditional random field (Collobert et al., 2011) and contextualized word representations (McCann et al., 2017; Peters et al., 2018; Devlin et al., 2019).",
"Named entity recognition has been explored for new domains and languages, such as social media (Finin et al., 2010; Ritter et al., 2011; Plank et al., 2014; Derczynski et al., 2015; Limsopatham and Collier, 2016; Aguilar et al., 2017), biomedical texts (Collier and Kim, 2004; Greenberg et al., 2018; Kulkarni et al., 2018), multilingual texts (Benajiba et al., 2008; Xie et al., 2018) and code-switched corpora (Aguilar et al., 2018; Ball and Garrette, 2018).",
"Various methods have been investigated for handling rare entities, for example incorporating external context (Long et al., 2017) or approaches that make use of distant supervision (Choi et al., 2018; Yang et al., 2018; Onoe and Durrett, 2019).",
"There has been relatively little prior work on named entity recognition in the software engineering domain.",
"Ye et al. (2016) annotated 4,646 sentences from StackOverflow with five named entity types (Programming Language, Platform, API, Tool-Library-Framework and Software Stan-dard).",
"The authors used a traditional feature-based CRF to recognize these entities.",
"In contrast, we present a much larger annotated corpus consisting of 15,372 sentences labeled with 20 fine-grained entity types.",
"We also develop a novel attention based neural NER model to extract those fine-grained entities.",
"In this work, we investigated the task of named entity recognition in the social computer programming domain.",
"We developed a new NER corpus of 15,372 sentences from StackOverflow and 6,510 sentences from GitHub annotated with 20 fine-grained named entities.",
"We demonstrate that this new corpus is an ideal benchmark dataset for contextual word representations, as there are many challenging ambiguities that often require long-distance context to resolve.",
"We also proposed a novel attention based model, named SoftNER, that outperforms the state-of-the-art NER models on this dataset.",
"Furthermore, we investigated the important sub-task of code recognition.",
"Our code recognition model captures additional spelling information beyond then contextual word representations and consistently helps to improve the NER performance.",
"We believe our corpus, StackOverflow-specific BERT embeddings and named entity tagger will be useful for various language-and-code tasks, such as code retrieval, software knowledge base extraction and automated question-answering.",
"We thank anonymous reviewers for their thoughtful comments.",
"We also thank NVIDIA, Google, and Ohio Supercomputer Center (Center, 2012) for providing GPU/TPU computing resources; Wuwei Lan for kindly helping to train in-domain BERT on StackOverflow data; Sydney Lee, Rita Tong, Lillian Chow, and Raleigh Potluri for help with data annotation.",
"This research is supported in part by the NSF awards IIS-1822754 and IIS-1845670, ODNI and IARPA via the BETTER program contract 19051600004, ARO and DARPA via the SocialSim program contract W911NF-17-C-0095, Criteo Faculty Research Award to Wei Xu, and Amazon Faculty Research Award to Alan Ritter.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the offi-cial policies, either expressed or implied, of NSF, ODNI, IARPA, ARO, DARPA or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"method",
"other",
"other",
"other",
"other",
"other"
] |
[
"We propose a multi-task, probabilistic approach to facilitate distantly supervised relation extraction by bringing closer the representations of sentences that contain the same Knowledge Base pairs.",
"To achieve this, we bias the latent space of sentences via a Variational Autoencoder ( VAE ) that is trained jointly with a relation classifier.",
"The latent code guides the pair representations and influences sentence reconstruction.",
"Experimental results on two datasets created via distant supervision indicate that multi-task learning results in performance benefits.",
"Additional exploration of employing Knowledge Base priors into the VAE reveals that the sentence space can be shifted towards that of the Knowledge Base, offering interpretability and further improving results 1 .",
"Distant supervision (DS) is a setting where information from existing, structured knowledge, such as Knowledge Bases (KB), is exploited to automatically annotate raw data.",
"For the task of relation extraction, this setting was popularised by Mintz et al. (2009).",
"Sentences containing a pair of interest were annotated as positive instances of a relation, if and only if the pair was found to share this relation in the KB.",
"However, due to the strictness of this assumption, relaxations were proposed, such as the at-least-one assumption introduced by Riedel et al. (2010): Instead of assuming that all sentences in which a known related pair appears express the relationship, we assume that at least one of these sentences (namely a bag of sentences) expresses the relationship.",
"Figure 1 shows example bags for two entity pairs.",
"The usefulness of distantly supervised relation extraction (DSRE) is reflected in facilitating automatic data annotation, as well as the usage of such data to train models for KB population (Ji and Grishman, 2011).",
"However, DSRE suffers from noisy instances, long-tail relations and unbalanced bag sizes.",
"Typical noise reduction methods have focused on using attention (Lin et al., 2016; Ye and Ling, 2019) or reinforcement learning (Qin et al., 2018b; Wu et al., 2019).",
"For long-tail relations, relation type hierarchies and entity descriptors have been proposed (She et al., 2018; Zhang et al., 2019; Hu et al., 2019), while the limited bag size is usually tackled through incorporation of external data (Beltagy et al., 2019), information from KBs (Vashishth et al., 2018) or pre-trained language models (Alt et al., 2019).",
"Our goal is not to investigate noise reduction, since it has already been widely addressed.",
"Instead, we aim to propose a more general framework that can be easily combined with existing noise reduction methods or pre-trained language models.",
"Methods that combine information from Knowledge Bases in the form of pre-trained Knowledge Graph (KG) embeddings have been particularly effective in DSRE.",
"This is expected since they capture broad associations between entities, thus assisting the detection of facts.",
"Existing approaches either encourage explicit agreement between sentenceand KB-level classification decisions (Weston et al., 2013; Xu and Barbosa, 2019), minimise the distance between KB pairs and sentence embeddings (Wang et al., 2018) or directly incorporate KB embeddings into the training process in the form of attention queries (Han et al., 2018; She et al., 2018; Hu et al., 2019).",
"Although these signals are beneficial, direct usage of KB embeddings into the model often requires explicit KB representations of entities and relations, leading to poor generalisation to unseen examples.",
"In addition, forcing decisions between KB and text to be the same makes the connection between context-agnostic (from the KB) and context-aware (from sentences) pairs rigid, as they often express different things.",
"Variational Autoencoders ( VAE s) (Kingma and Welling, 2013) are latent variable encoder-decoder models that parameterise posterior distributions using neural networks.",
"As such, they learn an effective latent space which can be easily manipulated.",
"Sentence reconstruction via encoder-decoder networks helps sentence expressivity by learning semantic or syntactic similarities in the sentence space.",
"On the other hand, signals from a KB can assist detection of factual relations.",
"We aim to combine these two using a VAE together with a bag-level relation classifier.",
"We then either force each sentence's latent code to be close to the Normal distribution (Bowman et al., 2016), or to a prior distribution obtained from KB embeddings.",
"This latent code is employed into sentence representations for classification and is responsible for sentence reconstruction.",
"As it is influenced by the prior we essentially inject signals from the KB to the target task.",
"In addition, sentence reconstruction learns to preserve elements that are useful for the bag relation.",
"To the best of our knowledge, this is the first attempt to combine a VAE with a bag-level classifier for DSRE.",
"Finally, there are methods for DSRE that follow a rather flawed evaluation setting, where several test pairs are included in the training set.",
"Under this setting, the generalisability of such methods can be exaggerated.",
"We test these approaches under data without overlaps and find that their performance is severely deprecated.",
"With this comparison, we aim to promote evaluation on the amended version of existing DSRE data that can prevent memori-sation of test pair relations.",
"Propose a multi-task learning setting for DSRE.",
"Our results suggest that combination of both bag classification and bag reconstruction improves the target task.",
"Propose a probabilistic model to make the space of sentence representations resemble that of a KB, promoting interpretability.",
"Compare existing approaches on data without train-test pair overlaps to enforce fairer comparison between models.",
"In DSRE, the bag setting is typically adopted.",
"A model's input is a pair of named entities e 1 , e 2 (mapped to a Knowledge Base), and a bag of sentences B = { s 1 , s 2 , . . . , s n } , where the pair occurs, retrieved from a raw corpus.",
"The goal of the task is to identify the relation(s), from a pre-defined set R , that the two entities share, based on the sentences in the bag B .",
"Since each pair can share multiple relations at the same time, the task is considered a multi-label classification problem.",
"Our proposed approach is illustrated in Figure 2. The main goal is to create a joint learning setting where a bag of sentences is encoded and reconstructed and, at the same time, the bag representation is used to predict relation(s) shared between two given entities.",
"The architecture receives as input a bag of sentences for a given pair and outputs",
"(i) predicted relations for the pair and",
"(ii) the reconstructed sentences in the bag.",
"The two outputs are produced by two branches: the left branch, corresponding to bag classification and the right branch, corresponding to bag reconstruction.",
"Both branches start from a shared encoder and they communicate via the latent code of a VAE that is responsible for the information used in the representation and reconstruction of each sentence in the bag.",
"Naturally, both branches have an effect on one another during training.",
"Autoencoders (Rumelhart et al., 1986) are encoder-decoder neural networks that are trained in an unsupervised manner, i.e., to reconstruct their input",
"(e.g. a sentence).",
"They learn an informative representation of the input into a dense and smaller feature vector, namely the latent code.",
"This intermediate representation is then used to fully reconstruct the original input.",
"Variational Autoencoders ( VAE ) (Kingma and Welling, 2013) offer better generalisation capabilities compared to the former by sampling the features of the latent code from a prior distribution that we assume to be similar to the distribution of the data.",
"We form the input of the network similarly to previous work.",
"Each sentence in the input bag is transformed into a sequence of vectors.",
"Words and positions are mapped into real-valued vectors via word embedding E ( w ) and position embedding layers E ( p ) , similarly to Lin et al. (2016).",
"The concatenation of word ( w ) and position ( p ) embeddings x t = [ w t ; p ( e 1 ) t ; p ( e 2 ) t ] forms the representation of each word in the input sentence.",
"A Bidirectional Long-Short Term Memory (BiLSTM) network (Hochreiter and Schmidhuber, 1997) acts as the encoder, producing contextualised representations for each word.",
"The representations of the left-to-right and right-to-left passes of the BiLSTM are summed to produce the output representation of each word t , o t = o t + o t , as well as the representations of the last hidden h = h + h and cell states c = c + c of the input sentence.",
"We use the last hidden and cell states of each sentence s to construct the parameters of a posterior distribution q ( z | s ) using two linear layers, = W [ h ; c ] + b , 2 = W [ h ; c ] + b , (1) where and 2 are the parameters of a multivariate Gaussian, representing the feature space of the sentence.",
"This distribution is approximated via a latent code z , using the reparameterisation trick (Kingma and Welling, 2013) to enable back-propagation, as follows: z = + (cid:12) (cid:15) , where (cid:15) N ( 0 , I ) .",
"(2) This trick essentially forms the posterior as a function of the normal distribution.",
"The decoder network is a uni-directional LSTM network, that reconstructs each sentence in the input bag.",
"The input is formed in two steps.",
"Firstly, the latent code z is given as the initial hidden state of the decoder h (cid:48) 0 via a linear layer transformation.",
"Secondly, the same latent code is concatenated with the representation of each word w t in the input sequence of the decoder.",
"A percentage of words in the decoder's input is randomly replaced by the UNK word to force the decoder to rely on the latent code for word prediction, similar to Bowman et al. (2016).",
"The optimisation objective of the VAE , namely Evidence Lower BOund (ELBO), is the combination of two losses.",
"The first is the reconstruction loss that corresponds to the cross entropy between the actual sentence s and its reconstruction s .",
"The second is the Kullback-Leibler divergence ( DKL ) between a prior distribution p ( z ) , which the latent code is assumed to follow, and the posterior q ( z | h ) , which the decoder produces, LELBO = E z q ( z | h ) [log( p ( h | z ))] DKL ( q ( z | h ) || p ( z )) (4) The first loss is responsible for the accurate reconstruction of each word in the input, while the second acts as a regularisation term that encourages the posterior of each sentence to be close to the prior.",
"Typically, an additional parameter is introduced in front of the DKL to overcome KL vanishing, a phenomenon where the posterior collapses to the prior and the VAE essentially behaves as a standard autoencoder (Bowman et al., 2016).",
"Moving on to the left branch of Figure 2, in order to represent a bag we first need to represent each sentence inside it.",
"We realise this using information produced by the VAE as follows.",
"Given the contextualised output of the encoder o , we construct entity representations e 1 and e 2 for a given pair in a sentence by averaging the word representations included in each entity.",
"A sentence representation s is formed as follows: e i = 1 | e i | (cid:88) k e i o k , s = W v [ z ; e 1 ; e 2 ] , (5) where | e i | corresponds to the number of words inside the mention span of entity e i and z is the latent code of the sentence that was produced by the VAE , as described in Equation (2).",
"In order to form a unified bag representation B for a pair, we adopt the popular selective attention approach introduced by Lin et al. (2016).",
"In particular, we first map relations into real-valued vectors, via a relation embedding layer E ( r ) .",
"Each relation embedding is then used as a query over the sentences in the bag, resulting in | R | bag representations for each pair, a ( s i ) r = exp ( s (cid:62) i r ) (cid:80) j B exp ( s (cid:62) j r ) , B r = | B | (cid:88) i =1 a ( s i ) r s i , (6) where r is the embedding associated with relation r , s i is the representation of sentence s i B , a ( s i ) r is the weight of sentence s i with relation r and B r is the final bag representation for relation r .",
"During classification, we select the probability of predicting a relation category r , using the bag representation that was constructed when the respective relation embedding r was the query.",
"Binary cross entropy loss is applied on the resulting predictions, p ( r = 1 | B ) = ( W c B r + b c ) , LBCE = (cid:88) r y r log p ( r | B ) + (1 y r ) log(1 p ( r | B )) , (7) where W c and b c are learned parameters of the classifier, is the sigmoid activation function, p ( r | B ) is the probability associated with relation r given a bag B and y r is the ground truth for this relation with possible values 1 or 0.",
"In the scenario where no KB information is incorporated into the model, we simply assume that the prior distribution of the latent code p ( z ) is a standard Gaussian with zero mean and identity",
"covariance N ( 0 , I ) .",
"To integrate information about the nature of triples into the bag-level classifier, we create KB-guided priors as an alternative to the standard Gaussian.",
"In particular, we train a link prediction model, such as TransE (Bordes et al., 2013), on a subset of the Knowledge Graph that was used to originally create the dataset.",
"Using the link prediction model, we obtain entity embeddings for the subset KB.",
"A KB-guided prior can thus be constructed for each pair, as another Gaussian distribution with mean value equal to the KB pair representation and covariance as the identity matrix, p ( z ) N ( KB , I ) , with KB = e h e t , (8) where e h and e t are the vectors for entities e head and e tail as resulted from training a link prediction algorithm on a KB.",
"The link prediction algorithm is trained to make representations of pairs expressing the same relations to be close in space.",
"Hence, by using KB priors we try to force the distribution of sentences in a bag to follow the distribution of the pair in the KB.",
"If one of the pair entities does not exist in the KB subset, the mean vector of the pair's prior will be zero, resulting in a standard Gaussian prior.",
"Finally, KB priors are only used during training.",
"Consequently, the model does not use any direct KB information during inference.",
"We train jointly bag classification and sentence reconstruction.",
"The final optimisation objective is formed as, L = LBCE + (1 ) LELBO , (9) where corresponds to a weight in [0 , 1] .",
"We weigh the classification loss more than the ELBO to allow the model to better fit the target task.",
"We experiment with the following two datasets: NYT 10.",
"The widely used New York Times dataset (Riedel et al., 2010) contains 53 relation categories including a negative relation (NA) indicating no relation between two entities.",
"We use the version of the data provided by the OpenNRE framework (Han et al., 2019), which removes overlapping pairs between train and test data.",
"The dataset statistics are shown in Table 1. Additional information can be found in Appendix A.1.",
"For the choice of the Knowledge Base, we use a subset of Freebase 2 that includes 3 million entities with the most connections, similar to Xu and Barbosa (2019).",
"For all pairs appearing in the test set of NYT 10 (both positive and negative), we remove all links in the subset of Freebase to ensure that we will not memorise any relations between them (Weston et al., 2013).",
"The resulting KB contains approximately 24 million triples.",
"WIKIDISTANT .",
"The WikiDistant dataset is almost double the size of the NYT 10 and contains 454 target relation categories, including the negative relation.",
"It was recently introduced by Han et al. (2020) as a cleaner and more well structured bag-level dataset compared to NYT 10, with fewer negative instances.",
"For the Knowledge Base, we use the version of Wikidata 3 provided by Wang et al. (2019b) (in particular the transductive split 4 ), containing approximately 5 million entities.",
"Similarly to Freebase, we remove all links between pairs in the test set from the resulting KB, which contains approximately 20 million triples after pruning.",
"Following prior work, we consider the Precision-Recall Area Under the Curve (AUC) as the primary",
"metric for both datasets.",
"We additionally report Precision at N (P@N), that measures the percentage of correct classifications for the top N most confident predictions.",
"To obtain the KB priors, we train TransE on the subsets of Freebase and Wikidata using the implementation of the DGL-KE toolkit (Zheng et al., 2020) for 500K steps and a dimensionality equal to the dimension of the latent code.",
"The main model was implemented with PyTorch (Paszke et al., 2019).",
"We use the Adam (Kingma and Ba, 2014) optimiser with learning rate 0 .",
"001 .",
"KL logistic annealing is incorporated only in the case where the prior is the Normal distribution to avoid KL vanishing (Bow-man et al., 2016).",
"Early stopping is used to determine the best epoch based on the AUC score on the validation set.",
"Words in the vocabulary are initialised with pre-trained, 50 -dimensional GloVe embeddings (Pennington et al., 2014).",
"We limit the vocabulary size to the top 40K and 50K most frequent words for NYT 10 and WIKIDISTANT , respectively.",
"To enable fast training, we use Adaptive Softmax (Grave et al., 2017).",
"The maximum sentence length is restricted to 50 for NYT 10 and 30 words for WIKIDISTANT .",
"Each bag in the training set is allowed to contain maximum 500 sentences selected randomly.",
"For prediction on the validation and test sets, all sentences (with full length) are used.",
"In this work we compare with various models applied on the NYT 10 dataset: PCNN-ATT (Lin et al., 2016) is one of the first neural models that uses a PCNN encoder and selective attention over the instances in a bag, similar to our approach.",
"RESIDE (Vashishth et al., 2018), utilises syntactic, entity and relation type information as additional input to the network to assist classification.",
"JOINT Method Encoder NYT 520K NYT 570K AUC ( % ) P@N ( % ) AUC ( % ) P@N ( % ) 100 200 300 100 200 300 Baseline BiLSTM 34.94 74.0 67.5 67.0 43.59 84.0 77.0 75.3 + p ( z ) N (0 ,I ) 38.59 74.0 74.5 71.6 44.64 80.0 76.0 75.6 + p ( z ) N ( KB ,I ) 42.89 83.0 75.5 73.0 45.52 81.0 77.5 73.6 PCNN-ATT (Lin et al., 2016) PCNN 32.66 71.0 67.5 62.6 36.25 76.0 72.5 64.0 JOINT NRE (Han et al., 2018) CNN 30.62 60.0 57.0 55.3 40.15 75.8 -68.0 RESIDE (Vashishth et al., 2018) BiGRU 35.80 80.0 69.0 65.3 41.60 84.0 78.5 75.6 INTRA-INTER BAG (Ye and Ling, 2019) PCNN 34.41 82.0 74.0 69.0 42.20 91.8 84.0 78.7 DISTRE (Alt et al., 2019) GPT-2 42.20 68.0 67.0 65.3 --Table 2: Performance comparison between different methods on the NYT 10 test set for the two different versions of the dataset.",
"NRE (Han et al., 2018) jointly trains a textual relation extraction component and a link prediction component by sharing attention query vectors among the two.",
"INTRA-INTER BAG (Ye and Ling, 2019) applies two attention mechanisms inside and across bags to enforce similarity between bags that share the same relations.",
"DISTRE (Alt et al., 2019) uses a pre-trained Transformer model, instead of a recurrent or convolutional encoder, fine-tuned on the NYT 10 dataset.",
"We report results on both the filtered data (520K) that do not contain train-test pair overlaps, as well as the non-filtered version (570K) to better compare with prior work 5 .",
"With the exception of DISTRE , all prior approaches were originally applied on the 570K version.",
"Hence, performance of prior work on the 520K version corresponds to re-runs of existing implementations (via their open-source code).",
"For the non-filtered version, results are taken from the respective publications 6 .",
"ver-For the WIKIDISTANT dataset, we compare with the PCNN-ATT model as this is the only model currently applied on this data (Han et al., 2020).",
"We also compare our proposed approach with two additional baselines.",
"The first baseline model (Base-line) does not use the VAE component at all.",
"In this case the sentence representation is simply created using the last hidden state of the encoder, s = [ h ; e 1 ; e 2 ] , instead of the latent code.",
"The second model ( p ( z ) N (0 , I ) ) incorporates reconstruction with a standard Gaussian prior and the final model ( p ( z ) N ( KB , I ) ) corresponds to our proposed model with KB priors.",
"The results of the proposed approach versus existing methods on the NYT 10 dataset are shown in Table 2. The addition of reconstruction further improves performance by 3.6 percentage points (pp), while KB priors offer an additional of 4.3pp.",
"Compared with DISTRE , our model achieves comparable performance, even if it does not use a pre-trained language model.",
"As we observe from the precision-recall curve in Figure 3, our model is competitive with DISTRE for up to 35% of the recall range but for the tail of the distribution a pre-trained language model has better results.",
"This can be attributed to the world knowledge it has obtained via pre-training, which is much more vast than a KB subset.",
"Overall, for the reduced version of the dataset VAE with KB-guided priors surpasses the entire recall range of all previous methods.",
"For the 570K version, our model is superior to other approaches in terms of AUC score, even for the baseline.",
"We speculate this is because we incorpo-sions using the OpenNRE toolkit.",
"rate argument representations into the bag representation.",
"As a result, overlapping pairs between training and test set have learnt strong argument representations.",
"Regarding the results on the WIKIDISTANT dataset in Table 3, once again we observe that reconstruction helps improve performance.",
"However, it appears that KB priors have a negative effect.",
"We find that in the NYT 10 dataset 96% of the training pairs are associated with a prior.",
"Instead, this portion is only 72% for WIKIDISTANT .",
"The reason for this discrepancy could be the reduced coverage that potentially causes a confusion between the two signals 7 .",
"To test this hypothesis, we re-run our models on a subset of the training data, removing pairs that do not have a KB prior.",
"As observed in the second half of Table 3, priors do seem to have a positive impact under this setting, indicating the importance of high coverage in prior-associated pairs.",
"We use this setting for the remainder of the paper.",
"We then check whether the latent space has indeed learned some information about the KB triples, by visualising the t-SNE plots of the priors, i.e. the KB vectors as resulted from training TransE (Equation (8)) and the posteriors, i.e. the vectors as resulted from the VAE encoder (Equation (1)).",
"Figure 4a illustrates the space of the priors in Freebase for the most frequent relation categories in the NYT 10 training set 8 .",
"As it can be observed, 7 If a pair does not have a KB prior it will be assigned the Normal prior instead.",
"8 We plot t-SNEs for the training set instead of the valida-tion/test sets because the WIKIDISTANT validation set contains too few pairs belonging to the top-10 categories.",
"NYT 10 validation set t-SNE can be found in the Appendix A.5 the separation is obvious for most categories, with a few overlaps.",
"Relations place of birth , place lived and place of death appear to reside in the same region.",
"This is expected as these relations can be shared by a pair simultaneously.",
"Another overlap is identified for contains , administrative divisions and capital .",
"Again, these are similar relations found between certain entity types (e.g. location, province, city).",
"Figure 4b shows the t-SNE plot for a collection of latent vectors (random selection of 2 sentences in a positive bag).",
"The space is very similar to that of the KB and the same overlapping regions are clearly observed.",
"A difference is that it appears to be less compact, as not all sentences in a bag express the exact same relation.",
"Similar observations stand for Wikidata priors, as shown in Figure 4c.",
"By looking at the space of the posteriors, we can see that although for most categories separation is achieved, there are 2 relations that are not so well separated in the posterior space.",
"We find that has part (cyan) and part of (or-ange) are opposite relations, that TransE can effectively learn thanks to its properties.",
"However, the model appears to not be able to fully separate the two.",
"These relations are expressed in the same manner, by only changing the order of the arguments.",
"As there is no restriction regarding the argument order in our model directionality can sometimes be an issue.",
"Finally, in order to check how the prior constraints affect sentence reconstruction, we illustrate reconstructions of sentences in the validation set of the NYT 10 in Table 4 and WIKIDISTANT in Table 5.",
"In detail, we give the input sentence to the network and employ greedy decoding using either the mean of the latent code or a random sample.",
"Manual inspection of reconstruction reveals that KB-priors generate longer sentences than the Normal prior by repeating several words (especially the UNK).",
"In fact, VAE with KB-priors fails to generate plausible and grammatical examples for NYT 10, as shown in Table 4.",
"Instead, reconstructions for WIKIDISTANT are slightly better, due to the less noisy nature of the dataset.",
"In both cases, we see that the reconstructions contain words that are useful for the target relation, e.g. words that refer to places such as new york , new jersey for the relation contains between bay village and ohio , or sport-related terms (football, team, league) for the statistical leader relationship between wayne rooney and england national team .",
"Distantly Supervised RE.",
"Methods developed for DSRE have been around for a long time, building upon the idea of distant supervision (Mintz et al., 2009) with the widely used NYT 10 corpus by Riedel et al. (2010).",
"Methods investigating this problem can be divided into several categories.",
"Initial approaches were mostly graphical models, adopted to perform multi-instance learning (Riedel et al., 2010), sentential evaluation (Hoffmann et al., 2011; Bai and Ritter, 2019) or multi-instance learning and multi-label classification (Surdeanu et al., 2012).",
"Subsequent approaches utilised neural models, with the approach of Zeng et al. (2015) introducing Piecewise Convolutional Neural Networks (PCNN) into the task.",
"Later approaches focused on noise reduction via selection of informative instances using either soft constraints, i.e., attention mechanisms (Lin et al., 2016; Ye and Ling, 2019; Yuan et al., 2019), or hard constraints by explicitly selecting non-noisy instances with reinforcement (Feng et al., 2018; Qin et al., 2018b,a; Wu et al., 2019; Yang et al., 2019) and curriculum learning (Huang and Du, 2019).",
"Noise at the word level was addressed in Liu et al. (2018a) via sub-tree parsing on sentences.",
"Adversarial training has been shown to improve DSRE in Wu et al. (2017), while additional unlabelled examples were exploited to assist classification with Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) in Li et al. (2019).",
"Recent methods use additional information from external resources such as entity types and relations (Vashishth et al., 2018), entity descriptors (Ji et al., 2017; She et al., 2018; Hu et al., 2019) or Knowledge Bases (Weston et al., 2013; Xu and Barbosa, 2019; Li et al., 2020b).",
"Sequence-to-Sequence Methods.",
"Autoencoders and variational autoencoders have been investigated lately for relation extraction, primarily for detection of relations between entity mentions in sentences.",
"Marcheggiani and Titov (2016) proposed discrete-state VAE s for link prediction, reconstructing one of the two entities of a pair at a time.",
"Ma et al. (2019) investigated conditional VAE s for sentence-level relation extraction, showing that they can generate relation-specific sentences.",
"Our overall approach shares similarities with this work since we also use VAE s for RE, though in a bag rather than a sentence-level setting.",
"VAE s have also been investigated for RE in the biomedical domain (Zhang and Lu, 2019), where additional non-labelled examples were incorporated to assist classification.",
"This work also has commonalities with our work but the major difference is that the former uses two different encoders while we use only one, shared among bag classification and bag reconstruction.",
"Other SEQ 2 SEQ methods treat RE as a sequence generation task.",
"Encoder-decoder networks were proposed for joint extraction of entities and relations (Trisedya et al., 2019; Nayak and Ng, 2020), generation of triples from sequences (Liu et al., 2018b) or generation of sequences from triples (Trisedya et al., 2018; Zhu et al., 2019).",
"VAE Priors.",
"Different types of prior distributions have been proposed for VAE s, such as the Vamp-Prior (Tomczak and Welling, 2018), Gaussian mixture priors (Dilokthanakul et al., 2016), Learned Accept/Reject Sampling (LARs) priors (Bauer and Mnih, 2019), non-parametric priors (Goyal et al., 2017) and others.",
"User-specific priors have been used in collaborative filtering for item recommendation (Karamanolakis et al., 2018), while topic-guided priors were employed for generation of topic-specific sentences (Wang et al., 2019a).",
"In our approach we investigate how to incorporate KB-oriented Gaussian priors in DSRE using a link prediction model to parameterise their mean vector.",
"We proposed a probabilistic approach for distantly supervised relation extraction, which incorporates",
"context agnostic knowledge base triples information as latent signals into context aware bag-level entity pairs.",
"Our method is based on a variational autoencoder that is trained jointly with a relation classifier.",
"KB information via a link prediction model is used in the form of prior distributions on the VAE for each pair.",
"The proposed approach brings close sentences that contain the same KB pairs and it does not require any external information during inference time.",
"Experimental results suggest that jointly reconstructing sentences with relation classification is helpful for distantly supervised RE and KB priors further boost performance.",
"Analysis of the generated latent representations showed that we can indeed manipulate the space of sentences to match the space of KB triples, while reconstruction is enforced to keep topic-related terms.",
"Future work will target experimentation with different link prediction models and handling of noninformative sentences.",
"Finally, incorporating large pre-trained language models (LMs) into VAEs is a recent and promising study (Li et al., 2020a) which can be combined with KBs as injecting such information into LMs has been shown to further improve their performance (Peters et al., 2019).",
"This research was supported by BBSRC Japan Partnering Award [Grant ID: BB/P025684/1] and based on results obtained from a project, JPNP20006, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).",
"The authors would like to thank the anonymous reviewers for their instructive comments."
] | [
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"objective",
"objective",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"other",
"method",
"method",
"other",
"method",
"other",
"method",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other"
] |
[
"Integrating extracted knowledge from the Web to knowledge graphs (KGs) can facilitate tasks like question answering.",
"We study relation integration that aims to align free-text relations in subject-relation-object extractions to relations in a target KG.",
"To address the challenge that free-text relations are ambiguous, previous methods exploit neighbor entities and relations for additional context.",
"However, the predictions are made independently, which can be mutually inconsistent.",
"We propose a two-stage Co llective R elation I ntegration (CoRI) model, where the first stage independently makes candidate predictions, and the second stage employs a collective model that accesses all candidate predictions to make globally coherent predictions.",
"We further improve the collective model with augmented data from the portion of the target KG that is otherwise unused.",
"Experiment results on two datasets show that CoRI can significantly outperform the baselines, improving AUC from .677 to .748 and from .716 to .780, respectively.",
"With its large volume, the Web has been a major resource for knowledge extraction.",
"Open information extraction (open IE; Sekine 2006; Banko et al. 2007) is a prominent approach that harvests subject-relation-object extractions in free text without assuming a predefined set of relations.",
"One way to empower downstream applications like question answering is to integrate those free-text extractions into a knowledge graph (KG), e.g., Freebase.",
"Relation integration is the first step to integrate those extractions, where their free-text relations ( i.e., source relations ) are normalized to relations in the target KG ( i.e., target relations ).",
"Only after relation integration can entity linking proceed to resolve the This work was performed while at Amazon.",
"Local Approaches.",
"Relation integration has been studied by the natural language processing (NLP) community.",
"With exact matching in literal form between entity names in the source graph and target KG, previous methods obtain parallel data , i.e., common entity pairs, between the two graphs as in Fig.",
"1. Features of the entity pairs ( e.g., Malia-Barack) in the source graph and their relations in the target KG ( e.g., father ) are used to train models to predict target relations for future extractions.",
"A common challenge is the ambiguity of source relations, e.g., parent may correspond to father or mother in different contexts.",
"Previous methods exploited contextual features including embeddings of seen entities ( e.g., Malia; Riedel et al. 2013), middle relations between ( e.g., parent; Riedel et al. 2013; Toutanova et al. 2015; Verga et al. 2017, 2016; Weston et al. 2013), and neighbor relations around the entity pair ( e.g., gender; Zhang et al. 2019).",
"Assuming rich contexts to address the ambiguity challenge, previous methods may fall short under the evolving and incomplete nature of the source Methods Middle No entity Neighbor Collective relation param.",
"graph.",
"For example, in the lower part of Fig. 1, emerging entities may come from new extractions with sparse contextual information.",
"For the pair Nell-Marie, a conventional model learned on the parallel data may have neither seen entities nor neighborhood information ( e.g., gender) to depend on, thus failing to disambiguate parent and wrongly predicting father .",
"Due to the local nature of previous approaches, i.e., predictions for different entity pairs are made independently of each other, the model is unaware that Nell has two fathers in the final predictions.",
"Such predictions are incoherent in common sense that a person is more likely to have one father and one mother, which is indicated by the graph structure around Malia in the target KG part of the parallel data.",
"To alleviate the incoherent prediction issue of local approaches, we propose Co llective R elation I ntegration (CoRI) that exploits the dependency of predictions between adjacent entity pairs to enforce global coherence.",
"Specifically, we follow two stages, i.e., candidate generation and collective inference .",
"In candidate generation, we simply use a local model to make independent predictions as candidates, e.g., father for all the three pairs in the lower part of Fig.",
"1. In collective inference, we employ a collective model that is aware of the common substructures of the target graph, e.g., Malia .",
"The collective model makes predictions by not only taking as input all contextual features to the local model but also the candidate predictions of the current and all neighbor pairs.",
"For the pair Nell-Marie, the collective model will have access to the candidate prediction father of Nell-Burton, which helps flip its final prediction to the correct mother .",
"Tab.",
"1 summarizes CoRI and representative previous work from four aspects.",
"To the best of our knowledge, CoRI is the first to collectively perform relation integration rather than locally.",
"predictions, the collective model needs to be trained to encode common structures of the target KG, e.g., Malia having only one father/mother in the parallel data of Fig.",
"1. To this end, we train the collective model in a stacked manner (Wolpert, 1992).",
"We first train the first-stage local model on the parallel data, then train the second-stage collective model by conditioning on the candidate predictions of neighbor entity pairs from the first stage ( e.g., father for Malia-Barrack) to make globally consistent predictions ( e.g., mother for Malia-Michelle).",
"Parallel Data Augmentation.",
"The parallel data may be bounded by the low recall of exact name matching or the limited extractions generated by open IE systems.",
"We observe that, even without counterpart extractions, the unmatched part of the target graph (as in Fig. 1) may also have rich common structures to guide the training of the collective model.",
"To this end, we propose augmenting the parallel data by sampling subgraphs from the unmatched KG and creating pseudo parallel data by synthesizing their extractions, so the collective model can benefit from additional training data characterizing the desired global coherence.",
"To summarize, our contributions are three-fold: (1) We propose CoRI, a two-stage framework that improves state-of-the-art methods by making collective predictions with global coherence.",
"(2) We propose using the unmatched target KG to augment the training data.",
"(3) Experimental results on two datasets demonstrate the superiority of our approaches, improving AUC from .677 to .748 and from .716 to .780, respectively.",
"In this section, we first formulate the task of relation integration, then describe local methods by exemplifying with the state-of-the-art approach OpenKI (Zhang et al., 2019).",
"We treat subject-relation-object extractions from open IE systems as a source graph K ( E , R ) = { ( s, r, o ) | s, o E , r R} , where E denotes extracted textual entities, e.g., Barack Obama, and R denotes extracted source relations, e.g., parent.",
"We denote by ( s, o ) a source entity pair.",
"For ( s, o ) , K s,o = { r | ( s, r, o ) K} denotes all source relations between them.",
"Similarly, K r = { s, o | ( s, r, o ) K} denotes all entity pairs with relation r in between.",
"Definition 1 ( Relation Integration ) .",
"Given a source graph K and a target KG K (cid:48) ( E (cid:48) , R (cid:48) ) with target entities E (cid:48) and target relations R (cid:48) , the task of relation integration is to predict all applicable target relations for each extracted entity pair in KR : KR R (cid:48) , where ( s, r (cid:48) , o ) is an integrated extraction indicating that a target relation r (cid:48) holds for ( s, o ) .",
"To train relation integration models, all methods employ parallel data formalized as follow: Definition 2 ( Parallel Data ) .",
"Parallel data are common entity pairs shared between KR and K (cid:48)R (cid:48) and their ground truth target relations in K (cid:48) : T = {(cid:104) ( s, o ) , K (cid:48) s,o (cid:105) | ( s, o ) KR K (cid:48)R (cid:48) } .",
"For example, (cid:104) ( Malia , Barack ) , { father }(cid:105) is an instance of parallel data in Fig.",
"1. To obtain parallel data, a widely used approach is to find entities shared by E and E (cid:48) by exact name matching, then generate common entity pairs and their ground truth.",
"Previous local methods score potential integrated extractions by assuming their independence:",
"where is the parameters of the local model.",
"One representative local model achieving state-of-the-art performance is OpenKI (Zhang et al., 2019).",
"It encodes the neighborhood of ( s, o ) in K by grouping and averaging embeddings of source relations in three parts.",
"Let K s, be the set of source relations between s and neighbor entities other than o , and similarly for K ,o .",
"OpenKI represents ( s, o ) by concatenating the three averaged embeddings into a local representation t l : t l = [ A ( K s,o ); A ( K s, ); A ( K ,o )] , (2) where l stands for local, and A ( . ) takes a set of relations and outputs the average of their embeddings.",
"where MLP l is a multi-layer perceptron and the sigmoid function.",
"Given a parallel data T = Nell Billy father parent father father Marie Nell parent Billy Burton father Marie input to the first stage input to the second stage source relation target relation entity parent parent father Burton Figure 2: Input of both stages on the Nell-Marie case.",
"{(cid:104) ( s, o ) , K (cid:48) s,o (cid:105)} , the loss function per training example trades between maximizing the probabilities of positive target relations and minimizing those of negative target relations: L (cid:0) ( s, o ) , K (cid:48) s,o (cid:1) = (cid:80) r (cid:48) K (cid:48) s,o log P ( s, r (cid:48) , o | K ) |K (cid:48) s,o | + (cid:80) r (cid:48) R (cid:48) \\K (cid:48) s,o log P ( s, r (cid:48) , o | K ) |R (cid:48) \\ K (cid:48) s,o | , (4) where is a hyperparameter to account for the imbanlance between positive and negative relations, because the latter often outnumber the former.",
"The final loss is the sum of all examples.",
"As discussed in 1, the drawback of local methods is that predictions of different entity pairs are independently made.",
"Neglecting their dependency may lead to predictions inconsistent with each other.",
"To address the issue, we propose a collective approach CoRI, which achieves collective relation integration via two stages: candidate generation and collective inference.",
"In this section, we demonstrate the input and output of the two stages, as well as our current implementations.",
"As mentioned in 1.1, candidate generation's responsibility is to provide candidate predictions to the collective inference stage.",
"Formally, candidate predictions l ( l means local) are generated by executing a local model on the source graph K : l = argmax P ( | K ) .",
"The candidate predictions in l may be partially wrong, but the other correct ones can help adjust",
"wrong predictions of their adjacent entity pairs in the collective inference stage, under the guidance of the collective model.",
"For example, in the upper part of Fig. 2, we have a source graph K with three entity pairs.",
"The input to candidate generation is the entire K .",
"After applying the local model (OpenKI in our case), we have three additional edges as the output l in the lower part of Fig.",
"2. Note that the candidate prediction father for Nell-Marie (denoted by black outline) is incorrect due to insufficient information in its neighborhood in K , i.e., both the relations in between of and around the entity pair (denoted by solid edges) are ambiguous parents.",
"Fortunately, the entity pair Nell-Burton is relatively easy for the local model to predict as father because it can leverage the neighbor relation father between Billy-Burton.",
"Such correct candidate predictions are included in l , provided to the collective inference stage as additional signals for later correction of the wrong predictions such as father for Nell-Marie.",
"Collective inference's responsibility is to encode the structures of the target graph and use such information to refine the candidate predictions l by enforcing coherence among them.",
"To this end, a collective model P (with parameters ) takes both the source graph K and the candidate predictions l as input, and outputs the final predictions : P ( | K ) = P ( | K , l ) .",
"In the Nell-Marie case of Fig. 2, when making the final prediction, its own candidate predictions and those of the neighbor entity pairs (solid edges in l of the lower part in Fig. 2) are used to leverage the dependency among them.",
"We concatenate the embeddings of candidate predictions to the local representation t l obtained in the first stage, and represent each entity pair as follow: t c = [ t l ; A ( ls,o ); A ( ls, ); A ( l ,o )] , (7) where c means collective.",
"ls,o includes candidate target relations between s and o , and similarly for ls, and l ,o .",
"Then we use another multi-layer perceptron MLP c to convert t c to probabilities P ( s, r (cid:48) , o | K , l ) = ( MLP c ( t c )) r (cid:48) , (8) and minimize the loss function for P similar to that of the local model P in Eq.",
"4.",
"According to Eq.",
"6, we need l as features to train the collective model P .",
"This is to ensure that P captures the dependencies among target relations.",
"One may ask why we do not directly use ground truth K (cid:48) instead of predictions l .",
"At test time, we can only use target relations predicted by P as input to P because the ground truth target relations of neighbor entity pairs might not be available.",
"If we train P using the ground truth, there will be a discrepancy between training and testing, potentially hurting the performance.",
"Specifically, we split the training set T into T folds.",
"We generate l by rotating and unioning a temporary local model's predictions on a held-out fold, where the temporary model is trained on the other folds.",
"Then we train P on the parallel data T with l .",
"In this manner, we can use the full dataset to optimize the collective model while avoiding generating candidates on the training data of the local model, which leads to overfitting.",
"The detailed training procedure is given in Alg.",
"1. 4 Data Augmentation w/ Unmatched KG As in Def.",
"2, the volume of parallel data is limited by the number of shared entity pairs KR K (cid:48)R (cid:48) of the two graphs.",
"In Fig. 1, the unmatched part of the target KG, containing entity pairs without extraction counterparts ( i.e., K (cid:48)R (cid:48) \\ KR ) and their target relations, can also indicate common substructures of the target KG, and guide the training of the collective model.",
"To this end, we propose leveraging unmatched KG to generate pseudo parallel data to augment the limited training data.",
"Synthesizing Pseudo Extractions.",
"To leverage the unmatched KG, we need to synthesize pseudo extractions for the target entities and relations to add to K as features.",
"Since we do not use entity-specific parameters, we only synthesize source relations like parent, and keep the target entities Parallel data parent mother father mother father gender gender Barack Michelle male Malia male Malia Michelle Barack parent Aligned Augmentation parent parent extractions KG Unmatched KG Pseudo extractions Pseudo parallel data Figure 3: Illustration of parallel data augmentation.",
"unchanged, as illustrated in Fig. 3.",
"Specifically, for each subject-relation-object tuple ( s (cid:48) , r (cid:48) , o (cid:48) ) in the unmatched KG, we keep s (cid:48) and o (cid:48) unchanged, and synthesize source relations r by sampling from: P ( r | r (cid:48) ) = |K r K (cid:48) r (cid:48) | |K (cid:48) r (cid:48) | , (9) i.e., the conditional probability of observing r given r (cid:48) based on co-occurrences in the parallel data.",
"|K r K (cid:48) r (cid:48) | is the number of entity pairs with both r and r (cid:48) in between, and |K (cid:48) r (cid:48) | is the number of entity pairs with r (cid:48) in between.",
"In this way, we obtain a pseudo extraction ( s, r, o ) , as detailed in Alg.",
"2 Pseudo Data Selection.",
"Definition 3 ( Pseudo Parallel Data ) .",
"Pseudo parallel data T p includes common entity pairs between pseudo extractions K p and the target KG K (cid:48) , associated with their ground truth target relations, i.e., T p = {(cid:104) ( s, o ) , K (cid:48) s,o (cid:105)} | ( s, o ) K p R K (cid:48)R (cid:48) } .",
"To make use of pseudo parallel data T p , the most straightforward way is to use them together with parallel data T to train the collective model P .",
"However, not all substructures in the target graph K (cid:48) are useful for P .",
"For example, when K (cid:48) has other domains irrelevant to the source extraction graph, substructures in those domains may distract P from concentrating on the domains of the source graph.",
"To mitigate this issue, we only use a subset of T p similar to T , as shown by the black-outlined parts in Fig. 3.",
"Specifically, we represent each entity pair ( s, o ) as a virtual document with surrounding target relations K (cid:48) s,o K (cid:48) s, K (cid:48) ,o Algorithm 2: Our augmentation approach.",
"as tokens.",
"For each entity pair from the parallel data T , we use BM25 (Robertson and Zaragoza, 2009) to retrieve its top K most similar entity pairs from T p , and add them to the selected pseudo parallel data T p for training, as detailed in Alg.",
"2. 5 Experimental Settings 5.1 Datasets and Evaluation We use the ReVerb dataset (Fader et al., 2011) as the source graph, and Freebase 1 and Wikidata 2 as the target KGs, respectively.",
"We follow the same name matching approach in Zhang et al. (2019) to obtain parallel data.",
"To simulate real scenarios where models are trained on limited labeled data but applied to a large testing set, we use 20% of entity pairs in the parallel data for training and the other 80% for testing, and there is no overlap.",
"We also compare the performance under other ratios in 6.3.",
"Dataset statistics are listed in Tab.",
"2. Datasets #Train #Test |R| ReVerb + Freebase 12,344 49,629 97,196 ReVerb + Wikidata 8,447 33,849 182,407 Table 2: Dataset statistics.",
"We evaluate by ranking all integrated extractions based on their probabilities, and report area under the curve ( AUC ).",
"Considering real scenarios where we want to integrate as many extractions as possible while keeping a high precision, we also report Recall and F 1 when precision is 0.8, 0.9, or 0.95.",
"We compare the following methods in experiments.",
"Relation Translation is a simple method that maps source relations to target relations with conditional probability P ( r (cid:48) | r ) similar to Eq.",
"9.",
"For an entity pair ( s, o ) , the predicted target relations are { arg max r (cid:48) P ( r (cid:48) | r ) | r K s,o } .",
"Universal Schema (E-model) (Riedel et al., 2013) learns entity and relation embeddings through matrix factorization, which cannot generalize to unseen entities.",
"It is a local model that scores each integrated extraction independently.",
"Rowless Universal Schema (Verga et al., 2017) is a local model which improves over the E-model by eliminating entity-specific parameters, thus generalizing to unseen entities.",
"OpenKI (Zhang et al., 2019) is a local model that addresses the ambiguity of source relations by using neighbor relations for more context.",
"CoRI is our collective two-stage relation integration model trained with Alg.",
"1. CoRI + DA is our model where the training data is augmented by pseudo parallel data with Alg.",
"2. To verify the necessity of retrieval -based pseudo data selection, we also compare with a random DA baseline where we select K random entity pairs.",
"CoRI + KGE is another approach to exploit the unmatched KG with KG embeddings ( KGE ) trained on the entire target KG in an unsupervised manner.",
"We initialize the embeddings of target relations averaged by A ( . ) in Eq.",
"7 with TransE (Bordes et al., 2013) embeddings trained on the target graph.",
"We uniformly use 32-dimension embeddings for all relations, and AdamW (Loshchilov and Hutter, 2019) optimizer with learning rate 0.01 and epsilon 10 -8 .",
"The ratio in Eq.",
"4 is set to 10.",
"We sample at most 30 neighbor source relations to handle entity pairs with too many neighbor relations.",
"We use T = 5 folds in Alg.",
"1 to train our collective model.",
"We retrieve top K = 5 entity pairs in pseudo data selection, adding about 20K and 12K entity pairs to the two datasets in Tab.",
"2, respectively.",
"We use BM25 (Robertson and Zaragoza, 2009) implementation in ElasticSearch 3 in pseudo data selection.",
"We use the KGE released by OpenKE.",
"4 Our model is trained with 32 CPU cores and a single 2080Ti GPU, and it takes 1-2 hours to converge.",
"We aim to answer the following questions: (1) Is CoRI superior to local models?",
"(2) Is CoRI robust w.r.t. varying size of training and testing data?",
"(3) Is unmatched KG useful for CoRI?",
"Is our parallel data augmentation approach the best choice?",
"In Tab.",
"3, we show results comparing all methods on both datasets.",
"Our observations are as follows.",
"Collective inference is beneficial.",
"Among the baselines, OpenKI generally performs best because it leverages neighbor relations besides middle relations between entity pairs, without relying on entity parameters.",
"Even without data augmentation, CoRI outperforms OpenKI by a large margin, improving AUC from .677 to .708 and from .716 to .746 on the two datasets, respectively, which demonstrates the effectiveness of collective inference.",
"Data augmentation further improves the performance.",
"By comparing CoRI with CoRI + DA (retrieval), we observe that data augmentation further improves AUC from .708 to .748 and from .746 to .780, respectively, which indicates that using unmatched KG can effectively augment the training of the collective model.",
"We plot the precision-recall curves of the best three approaches in Fig. 4.",
"It demonstrates the superiority of our methods across the whole spectrum.",
"Generalization on unseen entities is necessary.",
"Among the baselines, the E-model uses entity-specific parameters, hindering it from generalizing to unseen entities and making it less competitive.",
"As shown in Tab.",
"3, both KGE, random, and retrieval-based data augmentation approaches perform better than CoRI (without DA), indicating the effectiveness of using the unmatched KG.",
"Our retrieval-based DA outperforms the random coun-Datasets ReVerb + Freebase ReVerb + Wikidata Metrics AUC Prec = 0.8 Prec = 0.9 Prec = 0.95 AUC Prec = 0.8 Prec = 0.9 Prec = 0.95 Rec F 1 Rec F 1 Rec F 1 Rec F 1 Rec F 1 Rec F 1 Translation .571 .590 .679 .100 .180 .067 .125 .604 .595 .683 .088 .160 .042 .080 E-model .205 .014 .027 .010 .020 .005 .010 .214 ---Rowless .593 .473 .594 .372 .526 .186 .310 .647 .511 .624 .381 .536 .266 .416 OpenKI .677 .553 .654 .449 .599 .314 .472 .716 .605 .689 .511 .652 .407 .570 CoRI .708 .590 .679 .494 .638 .381 .544 .746 .641 .712 .558 .689 .461 .621 + KGE .711 .597 .684 .514 .654 .418 .581 .763 .662 .725 .596 .717 .520 .672 + DA (random) .734 .616 .696 .518 .658 .395 .558 .774 .678 .734 .606 .724 .521 .673 + DA (retrieval) .748 .636 .708 .539 .674 .421 .583 .780 .685 .738 .613 .729 .529 .680 Table 3: Main experimental results.",
"terpart, which confirms the superiority of similarity-based data augmentation in choosing substructures that cover domains relevant to the original parallel data.",
"Our DA approach outperforms KGE, demonstrating the necessity of selectively using the unused KG to avoid discrepancies with the parallel data.",
"Different Numbers of Pseudo Data Entity Pairs.",
"In Fig. 5, we compare the performance of DA w.r.t. different numbers of retrieved entity pairs K .",
"We observe that K =5 yields better performance than K =1.",
"However, further increasing K hurts the performance, which is probably due to pseudo entity pairs with lower similarity to the parallel data causing a domain shift.",
"This validates the necessity of selectively using pseudo parallel data.",
"Due to its collective nature, one may wonder about CoRI's performance w.r.t. other training and testing data sizes.",
"We analyze these factors in this section.",
"Our observations are similar on both datasets, so we only report the results on ReVerb + Freebase.",
"Varying Size of Training Data.",
"In Fig. 6a, we compare CoRI (without DA) with OpenKI by varying the portion of the parallel data for training from 20% (used in our main results in Tab. 3) to 80%.",
"We observe that using more training data improves the performance, as shown by the increasing trends w.r.t. all metrics.",
"Our method outperforms OpenKI in all settings, demonstrating that our method is effective in both highand low-resource settings.",
"Varying % of Accessible Neighbor Entity Pairs.",
"Our collective framework is special in its collective inference stage, where the collective model refines the candidate prediction of an entity pair by considering its neighbor entity pairs' candidates.",
"We hypothesize that the more neighbor entity pairs the collective model has access to, the better performance it should achieve.",
"For example, if we use a portion of 50%, candidate predictions for only half of the neighbor entity pairs rather than the entire l will be used in Eq.",
"7.",
"We vary the portion from 25% to 100% (used in our main experiments in Tab. 3).",
"As shown in Fig. 6b, even accessing 25% can make CoRI outperform OpenKI.",
"As the percentage increases, CoRI continues to improve, while OpenKI remains the same because it is local, i.e., not using candidate predictions.",
"In Fig. 7, we show two cases from ReVerb + Freebase where CoRI corrects the mistakes of OpenKI in the collective inference stage.",
"In the first case, the source relation is in between Iowa and Mahaska County is extracted but in the wrong direction.",
"OpenKI just straightforwardly predicts containedby based on the surface form, but fails to leverage the neighbor relations to infer that Iowa is a larger geographical area.",
"With the collective model, CoRI is able to use the other two candidate predictions of containedby to flip the wrong prediction to contains .",
"In the second case, a prediction is needed between Bily Joel and Columbia.",
"Here the source relation was in and the object entity Columbia are both ambiguous, which can refer to geographical containment with a place or membership to a company.",
"OpenKI makes no prediction due to the ambiguity, while CoRI makes the right prediction music label by collectively working on the other entity pairs, where all predictions coherently indicate that Columbia is a music company.",
"Relation integration has been studied by both the database (DB) and the NLP communities.",
"The DB community formulates it as schema matching that aligns the schemas of two tables, e.g., matching columns of an is in table to those of another subarea of table (Rahm and Bernstein, 2001; Cafarella et al., 2008; Kimmig et al., 2017).",
"Such table-level alignment is valid since all rows in an is in table should have the same semantics, i.e., being geographical containment or not.",
"However, in open IE, predictions should be made at the entity pair level because of the ambiguous nature of source relations.",
"Putting all extracted is in entity pairs into one table to conduct schema matching is problematic from the first step since the entity pairs may have different ground truths.",
"The NLP community, on the other hand, investigates the problem at the entity pair level.",
"Besides manually designed rules (Soderland et al., 2013), most works leverage the link structure between entities and relations.",
"Universal schema (Riedel et al., 2013) learns embeddings of entities and middle relations between entity pairs through decomposing their co-occurrence matrix.",
"However, the entity embeddings make it not generalize to unseen entities.",
"Other methods (Toutanova et al., 2015; Verga et al., 2016, 2017; Gupta et al., 2019) also exploit middle relations, but eliminate entity parameters.",
"Zhang et al. (2019) moves one step further by explicitly considering neighbor relations, leveraging more context from the local link structure.",
"Some works (Weston et al., 2013; Angeli et al., 2015) directly minimize the distance between embeddings of relations sharing the same entity pairs.",
"Yu et al. (2017) further leverage compositional representations of entity names instead of using free parameters to deal with unseen entities at test time.",
"There are also works on Open IE canonicalization that cluster source relations.",
"Some use entity pairs as clustering signals (Yates and Etzioni, 2009; Nakashole et al., 2012; Galarraga et al., 2014), while others use lexical features or side information (Min et al., 2012; Vashishth et al., 2018).",
"However, the clusters are not finally aligned to relations in target KGs, different from our problem.",
"The two-stage collective inference framework has been explored in other problems like entity linking (Cucerzan, 2007; Guo et al., 2013; Shen et al., 2012), where candidate entities are generated for each mention independently, and collectively ranked based on their compatibility in the second stage.",
"In machine translation, an effective approach to leverage monolingual corpus in the target language is to back-translate it to the source language to augment the limited parallel corpus (Sen-nrich et al., 2016).",
"The above works inspired us to use collective inference for relation integration and leverage the unmatched KG for data augmentation.",
"Another approach to perform collective inference is to solve learning problem with constraints, such as integer linear programming (Roth and Yih, 2004), posterior regularization (Ganchev et al., 2010), and conditional random fields (Laf-ferty et al., 2001).",
"Comparing to our approach, these methods usually involve heavy computation, or are hard to optimize.",
"Examining the performance of these methods is an interesting future direction.",
"Besides, we also adopted ideas of selecting samples from out-domain data similar to in-domain samples (Xu et al., 2020; Du et al., 2020) to select our pseudo parallel data.",
"In this paper, we proposed CoRI, a collective inference approach to relation integration.",
"To the best of our knowledge, this is the first work exploring this idea.",
"We devised a two-stage framework, where the candidate generation stage employs existing local models to make candidate predictions, and the collective inference stage refines the candidate predictions by enforcing global coherence.",
"Observing that the target KG is rich in substructures indicating the desired global coherence, we further proposed exploiting the unmatched KG by selectively synthesizing pseudo parallel data to augment the training of our collective model.",
"Our solution significantly outperforms all baselines on two datasets, indicating the effectiveness of our approaches.",
"We would like to thank Prashant Shiralkar, Hao Wei, Colin Lockard, Binxuan Huang, and all the reviewers for their insightful comments and suggestions."
] | [
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"method",
"other",
"method",
"objective",
"objective",
"method",
"objective",
"result",
"other"
] |
[
"Traditional Question Generation (TQG) aims to generate a question given an input passage and an answer.",
"When there is a sequence of answers, we can perform Sequential Question Generation (SQG) to produce a series of interconnected questions.",
"Since the frequently occurred information omission and coreference between questions, SQG is rather challenging.",
"Prior works regarded SQG as a dialog generation task and recurrently produced each question.",
"However, they suffered from problems caused by error cascades and could only capture limited context dependencies.",
"To this end, we generate questions in a semi-autoregressive way.",
"Our model divides questions into different groups and generates each group of them in parallel.",
"During this process, it builds two graphs focusing on information from passages, answers respectively and performs dual-graph interaction to get information for generation.",
"Besides, we design an answer-aware attention mechanism and the coarse-to-fine generation scenario.",
"Experiments on our new dataset containing 81.9K questions show that our model substantially outperforms prior works.",
"Question Generation (QG) aims to teach machines to ask human-like questions from a range of inputs such as natural language texts (Du et al., 2017), images (Mostafazadeh et al., 2016) and knowledge bases (Serban et al., 2016).",
"In recent years, QG has received increasing attention due to its wide applications.",
"Asking questions in dialog systems can enhance the interactiveness and persistence of human-machine interactions (Wang et al., 2018).",
"QG bene-fits Question Answering (QA) models through data augmentation (Duan et al., 2017) and joint learning (Sun et al., 2019).",
"It also plays an important role in education (Heilman and Smith, 2010) and clinical (Weizenbaum et al., 1966) systems.",
"Traditional Question Generation (TQG) is de-fined as the reverse task of QA, i.e., a passage and an answer (often a certain span from the passage) are provided as inputs, and the output is a question grounded in the input passage targeting on the given answer.",
"When there is a sequence of answers, we can perform Sequential Question Generation (SQG) to produce a series of interconnected questions.",
"Table 1 shows an example comparing the two tasks.",
"Intuitively, questions in SQG are much more concise and we can regard them with given answers as QA-style conversations.",
"Since it is more natural for human beings to test knowledge or seek information through coherent questions (Reddy et al., 2019), SQG has wide applications, e.g., enabling virtual assistants to ask questions based on previous discussions to get better user experiences.",
"SQG is a challenging task in two aspects.",
"First, information omissions between questions lead to complex context dependencies.",
"Second, there are frequently occurred coreference between questions.",
"Prior works regarded SQG as a dialog generation task (namely conversational QG) where questions are generated autoregressively (recurrently), i.e., a new question is produced based on previous outputs.",
"Although many powerful dialog generation models can be adopted to address the challenges mentioned above, there are two major obstacles.",
"First, these models suffer from problems caused by error cascades.",
"Empirical results from experiments reveal that the later generated questions tend to become shorter with lower quality, especially becoming more irrelevant to given answers, e.g., Why?, What else?.",
"Second, models recurrently generating each question struggle to capture complex context dependencies, e.g., long-distance coreference.",
"Essentially, SQG is rather different from dialog generation since all answers are given in advance and they act as strict semantic constraints during text generation.",
"To deal with these problems, we perform SQG in a semi-autoregressive way.",
"More specifically, we divide target questions into different groups (ques-tions in the same group are closely-related) and generate all groups in parallel.",
"Especially, our scenario becomes non-autoregressive if each group only contains a single question.",
"Since we eliminate the recurrent dependencies between questions in different groups, the generation process is much faster and our model can better deal with the problems caused by error cascades.",
"To get information for the generation process, we perform dual-graph interaction where a passage-info graph and an answer-info graph are constructed and iteratively updated with each other.",
"The passage-info graph is used for better capturing context dependencies, and the answer-info graph is used to make generated questions more relevant to given answers with the help of our answer-aware attention mechanism.",
"Besides, a coarse-to-fine text generation scenario is adopted for the coreference resolution between questions.",
"Prior works performed SQG on CoQA (Reddy et al., 2019), a high-quality dataset for conversational QA.",
"As will be further illustrated, a number of data in CoQA are not suitable for SQG.",
"Some researchers (Gao et al., 2019) directly discarded these data, but the remaining questions may become incoherent, e.g., the antecedent words for many pronouns are unclear.",
"To this end, we build a new dataset from CoQA containing 81.9K relabeled questions.",
"Above all, the main contributions of our work are: We build a new dataset containing 7.2K passages and 81.9K questions from CoQA.",
"It is the first dataset specially built for SQG as far as we know.",
"We perform semi-autoregressive SQG under dual-graph interaction.",
"This is the first time that SQG is not regarded as a dialog generation task.",
"We also propose an answer-aware attention mechanism and a coarse-to-fine generation scenario for better performance.",
"We use extensive experiments to show that our model outperforms previous work by a substantial margin.",
"Further analysis illustrated the impact of different components.",
"Dataset for this paper is available at https:// github.com/ChaiZ-pku/Sequential-QG .",
"TQG was traditionally tackled by rule-based methods (Lindberg et al., 2013; Mazidi and Nielsen, 2014; Hussein et al., 2014; Labutov et al., 2015), e.g., filling handcrafted templates under certain transformation rules.",
"With the rise of data-driven learning approaches, neural networks (NN) have gradually taken the mainstream.",
"Du et al. (2017) pioneered NN-based QG by adopting the Seq2seq architecture (Sutskever et al., 2014).",
"Many ideas were proposed since then to make it more powerful, including answer position features (Zhou et al., 2017), specialized pointer mechanism (Zhao et al., 2018), self-attention (Scialom et al., 2019), answer separation (Kim et al., 2019), etc.",
"In addition, enhancing the Seq2seq model into more complicated structures using variational inference, adversarial training and reinforcement learning (Yao et al., 2018; Kumar et al., 2019) have also gained much attention.",
"There are also some works performing TQG under certain constraints, e.g., controlling the topic (Hu et al., 2018) and difficulty (Gao et al., 2018) of questions.",
"Besides, combining QG with QA (Wang et al., 2017; Tang et al., 2017; Sun et al., 2019) is also focused by many researchers.",
"As human beings tend to use coherent questions for knowledge testing or information seeking, SQG plays an important role in many applications.",
"Prior works regarded SQG as a dialog generation task (namely conversational QA).",
"Pan et al. (2019) pre-trained a model performing dialog generation, and then fine-tuned its parameters by reinforcement learning to make generated questions relevant to given answers.",
"Gao et al. (2019) iteratively generated questions from previous outputs and leveraged off-the-shelf coreference resolution models to introduce a coreference loss.",
"Besides, additional human annotations were performed on sentences from input passages for conversation flow modeling.",
"Since SQG is essentially different from dialog generation, we discard its dialog view and propose the first semi-autoregressive SQG model.",
"Compared with using the additional human annotation in Gao et al. (2019), our dual-graph interaction deals with context dependencies automatically.",
"Besides, our answer-aware attention mechanism is much simpler than the fine-tuning process in Pan et al. (2019) to make outputs more answer-relevant.",
"As the reverse task of QA, QG is often performed on existing QA datasets, e.g., SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2016), etc.",
"However, questions are independent in most QA datasets, making TQG the only choice.",
"In recent years, the appearance of large-scale conversational QA datasets like CoQA (Reddy et al., 2019) and QuAC (Choi et al., 2018) makes it possible to train data-driven SQG models, and the CoQA dataset was widely adopted by prior works.",
"Since the test set of CoQA is not released to the public, its training set (7.2K passages with 108.6K questions) was split into new training and validation set, and its validation set (0.5K passages with 8.0K questions) was used as the new test set.",
"responding evidence highlighted in the passage.",
"This brings a big trouble for QG.",
"As an example, consider the yes/no questions counting for 19.8% among all questions.",
"Given the answer yes and a corresponding evidence ...the group first met on July 5 , 1967 on the campus of the Ohio state uni-versity... , there are many potential outputs, e.g., Did the group first met in July? , Was the group first met in Ohio state? .",
"When considering the context formed by previous questions, the potential outputs become even more (the original question in CoQA is Was it founded the same year? ).",
"When there are too many potential outputs with significantly different semantic meanings, training a converged QG model becomes extremely difficult.",
"For this reason, Gao et al. (2019) directly discarded questions that cannot be answered by spans from passages.",
"However, the remaining questions can become incoherent, e.g., antecedent words for many pronouns become unclear.",
"To this end, we build a new dataset from CoQA by preserving all 7.7K passages and rewriting all questions and answers.",
"More specifically, we first discarded questions that are unsuitable for SQG.",
"To do so, three annotators were hired to vote for the preservation/deletion of each question.",
"A question is preserved if and only if it can be answered by a certain span from the input passage 2 .",
"As a result, most deleted questions were yes/no questions and unanswerable questions.",
"Besides, the kappa score between results given by different annotators was 0.83, indicating that there was a strong interagreement between annotators.",
"For the remaining QA-pairs, we preserved their original order and replaced all answers by spans from input passages.",
"After that, we rewrote all questions to make them coherent.",
"To avoid over-editing, annotators were asked to modify as little as possible.",
"It turned out that in most cases, they only needed to deal with coreference since the prototype of pronouns were no longer existed.",
"To further guarantee the annotation quality, we hired another project manager who daily examined 10% of the annotations from each annotator and provided feedbacks.",
"The annotation was considered valid only when the accuracy of examined results surpasses 95%.",
"Our annotation process took 2 months, and we finally got a dataset containing 7.7K passage with 81.9K QA-pairs.",
"2 Using certain spans from input passages (instead of free-formed text) as answers is a conversion in QG.",
"In this way, the number of potential output questions is greatly reduced.",
"As shown in Figure 1, the model first builds a passage-info graph and an answer-info graph by its passage-info encoder and answer-info encoder respectively.",
"After that, it performs dual-graph interaction to get representations for the decoder.",
"Finally, different groups of questions are generated in parallel under a coarse-to-fine scenario.",
"Both encoders and decoder take the form of Transformer architecture (Vaswani et al., 2017).",
"In SQG, we input a passage composed by n sentences P = { S i } ni =1 and a sequence of l answers { A i } li =1 , each A i is a certain span of P .",
"The target output is a series of questions { Q i } li =1 , where Q i can be answered by A i according to the input passage P and previous QA-pairs.",
"As mentioned above, we perform SQG in an semi-autoregressive way, i.e., target questions are divided into into different groups.",
"Ideally, questions in the same group are expected to be closely-related, while questions in different groups should be as independent as possible.",
"Our model takes a simple but effective unsupervised question clustering method.",
"The intuition is: if two answers come from the same sentence, the two corresponding questions are likely to be closely-related.",
"More specifically, if the k -th sentence S k contains p answers from { A i } li =1 , we cluster them into an answer-group G ansk = { A j 1 , A j 2 , ..., A j p } where j 1 < j 2 < ... < j p are continuous indexes from { 1 , 2 , ..., l } .",
"By replacing each answer in G ansk with its corresponding question, we get a question-group G quesk = { Q j 1 , Q j 2 , ..., Q j p } , and we further define a corresponding target-output T k as Q j 1 [ sep ] Q j 2 [ sep ] ... [ sep ] Q j p where [ sep ] is a special token.",
"In Table 1, there are four target outputs T 1 , T 2 , T 4 , T 5 (no T 3 since the third sentence in Table 1 do not contain any answer), T 2 is What was he doing there? [sep] On What? [sep] ... [sep] What was Tim doing? corresponding with the second sentence, and T 5 is What did he say? corresponding with the last sentence.",
"Supposing there are m answerand question-groups, then our model generates all the m target-outputs in parallel, i.e., all questions are generated in a semi-autoregressive way.",
"As shown in Figure 1, our passage-info encoder maps input sentences { S i } n i =1 into their sentence representations { s i } ni =1 where every s i R 2 d s .",
"We regard each sentence as a sequence of words and replace each word by its pre-trained word embeddings (Mikolov et al., 2013) which is a dense vector.",
"After that, the sequence of word embeddings is sent to a Transformer-encoder that outputs a corresponding sequence of vectors.",
"By averaging these vectors, we get the local representation s locali R d s of S i .",
"After we get the local representations of all sentences { S i } ni =1 in passage P , another Transformer-encoder is adopted to map the sequence { s locali } ni =1 into { s globali } ni =1 , where s globali R d s is called the Figure 2: Illustration of answer embeddings and an answer-attention head for the forth sentence in Table",
"In other words, the passage-info encoder takes a hiarachical structure.",
"We expect the local and global representations capture intraand intersentence context dependencies respectively, and the final representation for S i is s i = [ s locali ; s globali ] R 2 d s .",
"As described in Section 4.1, the input answers are split into m answer-groups.",
"For G ansk corresponding with the k -th sentence of the input passage, we define { G ansk , S k } as a rationale R k , and further obtain its representation r k R 2 d r by our answer-info encoder, which is based on a Transformer-encoder regarding sentence S k as its input.",
"To further consider information from G ansk , two more components are added into the answer-info encoder, as shown in Figure",
"2. First, we adopt the answer-tag features.",
"For each word w i in sentence S k , the embedding layer computes [ x w i ; x a i ] R d r as its final embedding, where x w i is the pre-trained word embedding and x a i contains answer-tag features.",
"More specifically, we give w i a label from { O, B, I } if it is outside, the beginning of, inside of any answer from G ansk , and use a vector corresponding with this label as x a i .",
"Second, we design the answer-aware attention mechanism.",
"In the multi-head attention layer, there are not only l h vanilla self-attention heads, but also l a answer-aware heads for each answer in G ansk .",
"In an answer-aware head corresponding with answer A , words not belonging to A are masked out during the attention mechanism.",
"The output of the Transformer-encoder is a sequence of vectors H enck = { h enck } ( h enck R d r ) corresponding with the input word sequence from S k .",
"In our SQG task, the input passage contain n sentences, which can be represented by { s i } ni =1 R 2 d s leveraging the passage-info encoder.",
"Among all input sentences, only m of them contain certain answers ( m n ), and we further define m rationales based on these sentences, { G ansF ( j ) , SF ( j ) } mj =1 , where the j -th rationale ( j { 1 , 2 , ..., m } ) corresponds with the F ( j ) -th sentence of the input passage ( F ( j ) { 1 , 2 , ..., n } ).",
"For the example in Table 1, n = 5 , m = 4 , F ( j ) maps { 1 , 2 , 3 , 4 } into { 1 , 2 , 4 , 5 } respectively.",
"Using the answer-info encoder, we can get representations { r F ( j ) } mj =1 R 2 d s for all rationales.",
"We further build a passage-info graph V and an answer-info graph U based on these representations.",
"For the rationale corresponding with the k -th sentence of the input passage, we add node u k , v k in graph U , V respectively.",
"For the example in Table 1, U is compused by { u 1 , u 2 , u 4 , u 5 } and V is compused by { v 1 , v 2 , v 4 , v 5 } , as shown in Figure",
"1. The initial representation for u k is computed by: u (0) k = ReLU ( W u [ r k ; e k ] + b u ) R d g (1) where r k R 2 d r is the rationale representation, e k R d e is the embedding of index k , and W u R ( d e +2 d r ) d g , b u R d g are trainable parameters.",
"And the initial representation for v k is: v (0) k = ReLU ( W v [ s k ; e k ] + b v ) R d g (2) where s k R 2 d s is the sentence representation and W v R ( d e +2 d s ) d g , b v R d g are parameters.",
"After adding these points, there are m nodes in U and V respectively.",
"For u i , u j U corresponding with the i -th, j -th input sentences respectively, we add an edge between them if | i j | < ( is a hyper-parameter).",
"Similarly, we add edges into V and the two graphs are isomorphic.",
"In our answer-info graph U , node representations contain information focused on input answers.",
"In the passage-info graph V , node representations capture interand intra-sentence context dependencies.",
"As mentioned above, a good question should be answer-relevant as well as capturing complex context dependencies.",
"So we should combine information in both U and V .",
"Our dual-graph interaction is a process where U and V iteratively update node representations with each other.",
"At time step t , representations u ( t 1) i , v ( t 1) i are updated into u ( t ) i , v ( t ) i respectively under three steps.",
"First, we introduce the information transfer step .",
"Taking U as an example.",
"Each u ( t 1) i receives a ( t ) i from its neighbors (two nodes are neighbors if there is an edge between them) by: a ( t ) i = (cid:88) u j N ( u i ) W ij u ( t 1) j + b ij (3) where N ( u i ) is composed by all neighbors of node u i and W ij R d g d g , b ij R d g are parameters controlling the information transfer.",
"For u i , u j and u i (cid:48) , u j (cid:48) whose | i j | = | i (cid:48) j (cid:48) | , we use the same W and b .",
"In other words, we can first create a sequence of matrices { W 1 , W 2 , ... } R d g d g and vectors { b 1 , b 2 , ... } R d g , and then use | i j | as the index to retrieve the corresponding W ij , b ij .",
"For graph V , we similarly compute a ( t ) i = (cid:88) v j N ( v i ) W ij v ( t 1) j + b ij (4) In the second step, we compute multiple gates .",
"For each u ( t 1) i in U , we compute an update gate y ( t ) i and a reset gate z ( t ) i by: y ( t ) i = ( W y [ a ( t ) i ; u ( t 1) i ]) z ( t ) i = ( W z [ a ( t ) i ; u ( t 1) i ]) (5) where W y , W z R 2 d g d g are paramenters.",
"Similarly, for each v ( t 1) i in V we compute: y ( t ) i = ( W y [ a ( t ) i ; v ( t 1) i ]) z ( t ) i = ( W z [ a ( t ) i ; v ( t 1) i ]) (6) Finally, we perform the information interaction , where each graph updates its node representations under the control of gates computed by the other graph .",
"More specifically, node representations are updated by: u ( t ) i = z ( t ) i (cid:12) u ( t 1) i + ( 1 z ( t ) i ) (cid:12) tanh ( W a [ a ( t ) i ; y ( t ) i (cid:12) u ( t 1) i ]) v ( t ) i = z ( t ) i (cid:12) v ( t 1) i + ( 1 z ( t ) i ) (cid:12) tanh ( W a [ a ( t ) i ; y ( t ) i (cid:12) v ( t 1) i ]) (7) The idea of using gates computed by the other graph to update node representations in each graph enables the information in input passage and answers interact more frequently, both of which act as strong constraints to the output questions.",
"By iteratively performing the three steps for T times, we get the final representations u ( T ) i and v ( T ) i for u i U and v i V .",
"For the k -th input sentence S k containing certain answers, our decoder generates the corresponding target-output T k .",
"As mentioned above, the generation process of all target-outputs are independent.",
"The decoder is based on the Transformer-decoder containing a (masked) multi-head self-attention layer, a multi-head encoder-attention layer, a feed-forward projection layer and the softmax layer.",
"To compute keys and values for the multi-head encoder-attention layer, it leverages the outputs from our answer-info encoder, i.e., it uses H enck described in Section 4.3 to generate T k corresponding with the k -th sentence.",
"To generate coherent questions, we need to capture the context dependencies between input answers and passages.",
"To this end, both u ( T ) k and v ( T ) k , which comes from the dual-graph interaction process, are used as additional inputs for generating T k .",
"First, they are concatenated with the output of each head from both (masked) multi-head self-attention layer and multi-head encoder-attention layer before sending to the next layer.",
"Second, they are concatenated with inputs of the feed-forward projection layer.",
"The two representations are also expected to make generated questions more relevant to given inputs.",
"Since the semi-autoregressive generation scenario makes it more challenging to deal with coreferences between questions (especially questions in different groups), we perform question generation in a coarse-to-fine manner.",
"The decoder only needs to generate coarse questions where all pronouns are replaced by a placeholder [p] .",
"To get final results, we use an additional pre-trained coreference resolution model to fill pronouns into different placeholders.",
"To make a fair comparison, we use the coreference resolution model (Clark and Manning, 2016) adopted by prior works CoreNQG (Du and Cardie, 2018) and CorefNet (Gao et al., 2019).",
"In this section, we first introduce the three kinds of baselines.",
"After that, we compare and analyse the results of different models under both automatic and human evaluation metrics.",
"We compared our model with seven baselines that can be divided into three groups.",
"First, we used three TQG models: the Seq2seq (Du et al., 2017) model which pioneered NN-based QG, the CopyNet (See et al., 2017) model that introduced pointer mechanism, and CoreNQG (Du and Cardie, 2018) which used hybrid features (word, answer and coreference embeddings) for encoder and adopted copy mechanism for decoder.",
"Second, since prior works regarded SQG as a conversation generation task, we directly used two powerful multi-turn dialog systems: the latent variable hierarchical recurrent encoder-decoder architecture VHRED (Serban et al., 2017) , and the hierarchical recurrent attention architecture HRAN (Xing et al., 2018) .",
"Third, we used prior works mentioned above.",
"For Pan et al. (2019), we adopted the ReDR model which had the best performance.",
"For Gao et al. (2019), we used the CorefNet model.",
"Although a CFNet in this paper got better results, it required additional human annotations denoting the relationship between input sentences and target questions.",
"So it is unfair to compare CFNet with other methods.",
"It is worth mentioning that when generating questions using the second and third groups of baselines, only previously generated outputs were used as dialog history, i.e., the gold standard questions are remain unknown (in some prior works, they were directly used as dialog history, which we think is inappropriate in practice).",
"Following the conventions, we used BLEU (Pa-pineni et al., 2002), ROUGE-L (Lin, 2004) and METEOR (Lavie and Agarwal, 2007) as automatic evaluation metrics.",
"We also computed the average word-number of generated questions.",
"As shown in Table 2, our semi-autoregressive model outperformed other methods substantially.",
"When we focus on the second and third groups of baselines regarding SQG as multi-turn dialog generation tasks, we can find that models from the third group are more powerful since they make better use of information from input passages.",
"Besides, models from the second group tend to generate shortest questions.",
"Finally, similar to the problem that dialog systems often generate dull and responses, these models also suffer from producing general but meaningless questions like What?, How?, And else?.",
"When we compare the first and third groups of baselines (which are all QG models), it is not surprising that SQG models show more advantages than TQG models, as they take the relationships between questions into consideration.",
"Besides, CorefNet gets better performance among all baselines, especially ReDR.",
"This indicates that comparing with implicitly performing reinforcement learning through QA models, explicitly using target answers as inputs can be more effective.",
"Note that if we directly compare the performance between SQG task and TQG task under the same model (e.g., the Seq2seq model), evaluation scores for TQG tasks are much higher, which is not surprising since SQG is harder than TQG dealing with dependencies between questions.",
"Another fact lies in the computation of automatic evaluation metrics.",
"As shown in Table 2, questions in SQG datasets are much shorter than TQG.",
"Since our automatic evaluation metrics are based on n -gram overlaps between generated and gold standard questions, the scores significantly go down with the growth of n (for this reason, the BLEU 4 scores are not listed in Table 2).",
"This also illustrates the importance of performing human evaluation.",
"It is generally acknowledged that automatic evaluation metrics are far from enough for SQG.",
"So we perform human evaluation in five aspects.",
"Fluency measures if a question is grammatically correct and is fluent to read.",
"Coherence measures if a question is coherent with previous ones.",
"Coreference measures if a question uses correct pronouns.",
"Answerability measures if a question is targeting on the given answer.",
"Relevance measures if a question is grounded in the given passage.",
"Since performing human evaluation is rather expensive and time-consuming, we picked up the best TQG model (CoreNQG), SQG model (CorefNet) to compare with our model.",
"We randomly selected 20 passages from the test set with 207 given answers and asked 10 native speakers to evaluate the outputs of each model independently.",
"Under each aspect, reviewers are asked to choose a score from { 1, 2, 3 } , where 3 indicates the best quality.",
"The average scores for each evaluation metric are shown in Table",
"4. We can find that our model gets the best or competitive performance in each metric.",
"When it comes to fluency, all models get high performance, and the CorefNet that outputs BLEU 3 ROUGE METEOR No interact 11.35 37.31 17.05 Uni-graph 9.86 36.44 15.87 Uni-heads 10.33 37.48 16.24 No co2fine 11.75 37.92 17.17 Non-auto 7.79 33.62 14.83 Ours 12.06 38.15 17.26 Table 5: Results for ablation tests.",
"shortest questions gets the best score.",
"As for coherence, CoreNQG gets poor results since it generates questions independently.",
"When it comes to coreference, our model only slightly lower than CorefNet, which added direct supervision to attention weights by a coreference resolution model.",
"Finally, our model gets the best performance on both answer-abity and relevance.",
"However, it is worth noticing that all models get rather poor performances under these two aspects, indicating that making a concise question meaningful (i.e., targeting on given answers) with more information from input passage (i.e., performing proper information elimination) is a major challenge in SQG.",
"Besides, as pointed out by Table 3, questions in our SQG dataset are significantly shorter compared with TQG dataset, making subtle errors much easier to be noticed.",
"In this section, we perform ablation test to verify the influence of different components in our model.",
"First, we modify Equation 7 into u ( t ) i = z ( t ) i (cid:12) u ( t 1) i + ( 1 z ( t ) i ) (cid:12) tanh ( W a [ a ( t ) i ; y ( t ) i (cid:12) u ( t 1) i ]) v ( t ) i = z ( t ) i (cid:12) v ( t 1) i + ( 1 z i ( t ) ) (cid:12) tanh ( W a [ a ( t ) i ; y ( t ) i (cid:12) v ( t 1) i ]) (8) to get the no interact model, i.e., two graphs are independently updated without any interaction.",
"Second, we build a uni-graph model by removing the passage-info encoder (the remaining rationale graph is updated similarly to Li et al. (2015)).",
"Third, we discard the attention-aware heads in the rationale encoder to get a uni-heads model.",
"Then, we build the no co2fine model without the coarse-to-fine generation scenario.",
"Finally, we build a non-auto model that performs SQG in an non-autoregressive way, i.e., each question is generated in parallel.",
"Peter was a very sad puppy.",
"He had been inside of the pet store for a very long time.",
"In fact, he had been there for [three months] 1 !",
"Peter had seen many other puppies find a person; he began to wonder why he could not get one.",
"He thought that [maybe his fur was not pretty enough or maybe his bark was not loud enough] 2 .",
"He tried and tried to please every person who came to the store, but they all picked smaller puppies.",
"However, one day all of this changed.",
"[Sammie] 3 came into the store looking for [a golden puppy] 4 .",
"She wanted a puppy she could snuggle with.",
"It so happened that Peter was very sad and tired that day.",
"Sammie came to hold him.",
"Peter wanted to show off [his bark] 5 , but he was [too tired] 6 .",
"He [fell right to sleep] 7 .",
"Sammie loved him at once and loved holding him in her arms.",
"Sammie took [Peter] 8 home that day, and they made lots of fun memories.",
"As shown in Table 5, each component in our model plays an important part.",
"Results for the no interact model indicate that compared with independently updating the passage-info graph and answer-info graph, making these information more interacted by our dual-graph interaction scenario is more powerful.",
"Not surprisingly, the uni-graph model removing the passage encoder (i.e., less focusing on context dependencies between sentences from input passage), and the uni-heads model discarding our answer-aware attention mechanism (i.e., less focusing on given answers) get significant worse performance compared with our full model.",
"Besides, our coarse-to-fine scenario helps to better deal with the dependencies between questions since there are widespread coreferences.",
"Finally, although the architecture of non-auto model is a special case of our model where each group only contains a single question, the performance drops significantly, indicating the importance of using semi-autoregressive generation.",
"However, the dual-graph interaction still makes its performance better than the Seq2seq and CopyNet in Table",
"2. 6.2 Running Examples In Table 6, we present some generated examples comparing our model and the strongest baseline CorefNet.",
"On the one hand, our model performs better than CorefNet, especially that the output questions are more targeting on given answers (turn 2, 6, 7).",
"It also correctly deals with coreferences (e.g., distinguishing Peter and Sammie).",
"On the other hand, the generated questions have poor quality when gold standard questions involve more reasoning (turn 2, 6).",
"Besides, the gold standard questions are more concise as well (turn 4, 6).",
"In this paper, we focus on SQG which is an important yet challenging task.",
"Different from prior works regarding SQG as a dialog generation task, we propose the first semi-autoregressive SQG model, which divides questions into different groups and further generates each group of closely-related questions in parallel.",
"During this process, we first build a passage-info graph, an answer-info graph, and then perform dual-graph interaction to get representations capturing the context dependencies between passages and questions.",
"These representations are further used during our coarse-to-fine generation process.",
"To perform experiments, we analyze the limitation of existing datasets and create the first dataset specially used for SQG containing 81.9K questions.",
"Experimental results show that our model outperforms previous works by a substantial margin.",
"For future works, the major challenge is generating more meaningful, informative but concise questions.",
"Besides, more powerful question clustering and coarse-to-fine generation scenarios are also worth exploration.",
"Finally, performing SQG on other types of inputs, e.g., images and knowledge graphs, is an interesting topic.",
"This work was supported by National Natural Science Foundation of China (61772036) and Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology).",
"We thank the anonymous reviewers for their helpful comments.",
"Xiaojun Wan is the corresponding author."
] | [
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"abstain",
"other",
"method",
"objective",
"method",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Neural entity typing models typically represent fine-grained entity types as vectors in a high-dimensional space, but such spaces are not well-suited to modeling these types' complex interdependencies.",
"We study the ability of box embeddings , which embed concepts as d -dimensional hyperrectangles, to capture hierarchies of types even when these relationships are not defined explicitly in the ontology.",
"Our model represents both types and entity mentions as boxes.",
"Each mention and its context are fed into a BERT-based model to embed that mention in our box space; essentially, this model leverages typological clues present in the surface text to hypothesize a type representation for the mention.",
"Box containment can then be used to derive both the posterior probability of a mention exhibiting a given type and the conditional probability relations between types themselves.",
"We compare our approach with a vector-based typing model and observe state-of-the-art performance on several entity typing benchmarks.",
"In addition to competitive typing performance, our box-based model shows better performance in prediction consistency (predicting a supertype and a subtype together) and confidence (i.e., calibration), demonstrating that the box-based model captures the latent type hierarchies better than the vector-based model does.",
"1 1 Introduction The development of named entity recognition and entity typing has been characterized by a growth in the size and complexity of type sets: from 4 (Tjong Kim Sang and De Meulder, 2003) to 17 (Hovy et al., 2006) to hundreds (Weischedel and Brun-stein, 2005; Ling and Weld, 2012) or thousands (Choi et al., 2018).",
"These types follow some kind 1 The code is available at https://github.com/ yasumasaonoe/Box4Types .",
"of hierarchical structure (Weischedel and Brunstein, 2005; Ling and Weld, 2012; Gillick et al., 2014; Murty et al., 2018), so effective models for these tasks frequently engage with this hierarchy explicitly.",
"Prior systems incorporate this structure via hierarchical losses (Murty et al., 2018; Xu and Barbosa, 2018; Chen et al., 2020) or by embedding types into a high-dimensional Euclidean or hyperbolic space (Yogatama et al., 2015; Lopez and Strube, 2020).",
"However, the former approach requires prior knowledge of the type hierarchy, which is unsuitable for a recent class of large type sets where the hierarchy is not explicit (Choi et al., 2018; Onoe and Durrett, 2020a).",
"The latter approaches, while leveraging the inductive bias of hyperbolic space to represent trees, lack a probabilistic interpretation of the embedding and do not naturally capture all of the complex type relationships beyond strict containment.",
"In this paper, we describe an approach that represents entity types with box embeddings in a high-dimensional space (Vilnis et al., 2018).",
"We build an entity typing model that jointly embeds each entity mention and entity types into the same box space to determine the relation between them.",
"Volumes of boxes correspond to probabilities and taking intersections of boxes corresponds to computing joint distributions, which allows us to model mention-type relations (what types does this mention ex-hibit?) and type-type relations (what is the type hierarchy?).",
"Concretely, we can compute the conditional probability of a type given the entity mention with straightforward volume calculations, allowing us to construct a probabilistic type classification model.",
"Compared to embedding types as points in Euclidean space (Ren et al., 2016a), the box space is expressive and suitable for representing entity types due to its geometric properties.",
"Boxes can nest, overlap, or be completely disjoint to capture The Hunger Games, the first of 3 best selling books by Suzanne Collins .",
"subtype, correlation, or disjunction relations, properties which are not explicitly manifested in Euclidean space.",
"The nature of the box computation also allows these complex relations to be represented in a lower-dimensional space than needed by vector-based models.",
"In our experiments, we focus on comparing our box-based model against a vector-based baseline.",
"We evaluate on four entity typing benchmarks: Ultra-fine Entity Typing (Choi et al., 2018), OntoNotes (Gillick et al., 2014), BBN (Weischedel and Brunstein, 2005), and FIGER (Ling and Weld, 2012).",
"To understand the behavior of box embeddings, we further analyze the model outputs in terms of consistency (predicting coherent supertypes and subtypes together), robustness (sensitiv-ity against label noise), and calibration (i.e., model confidence).",
"Lastly, we compare entity representations obtained by the box-based and vector-based models.",
"Our box-based model outperforms the vector-based model on two benchmarks, Ultra-fine Entity Typing and OntoNotes, achieving state-of-the-art-performance.",
"In our other experiments, the box-based model also performs better at predicting supertypes and subtypes consistently and being robust against label noise, indicating that our approach is capable of capturing the latent hierarchical structure in entity types.",
"When predicting class labels like entity types that exhibit a hierarchical structure, we naturally want our model's output layer to be sensitive to this structure.",
"Previous work (Ren et al., 2016a; Shimaoka et al., 2017; Choi et al., 2018; Onoe and Durrett, 2019, inter alia) has fundamentally treated types as vectors, as shown in the left half of Figure",
"1. As is standard in multiclass or multi-label classification, the output layer of these models typically involves taking a dot product between a mention embedding and each possible type.",
"A type could be more general and predicted on more examples by having higher norm, 2 but it is hard for these representations to capture that a coarse type like Person will have many mutually orthogonal subtypes.",
"By contrast, box embeddings naturally represent these kinds of hierarchies as shown in the right half of Figure",
"1. A box that is completely contained in another box is a strict subtype of that box: any entity exhibiting the inner type will exhibit the outer one as well.",
"Overlapping boxes like Politician and Author represent types that are not related in the type hierarchy but which are not mutually exclusive.",
"The geometric structure of boxes enables complex interactions with only a moderate number of dimensions (Dasgupta et al., 2020).",
"Vilnis et al. (2018) also define a probability measure over the box space, endowing it with probabilistic semantics.",
"If the boxes are restricted to a unit hypercube, for example, the volumes of type boxes represent priors on types and intersections capture joint probabilities, which can then be used to derive conditional probabilities.",
"Critically, box embeddings have previously been trained explicitly to reproduce a given hierarchy such as WordNet.",
"A central question of this work is whether box embeddings can be extended to model the hierarchies and type relationships that are implicit in entity typing data: we do not assume access to explicit knowledge of a hierarchy during training.",
"While some datasets such as OntoNotes have orderly ontologies, recent work on entity typing has often focused on noisy type sets from crowdworkers (Choi et al., 2018) or derived from Wikipedia (Onoe and Durrett, 2020a).",
"We show that box embeddings can learn these structures organically; in fact, they are not restricted to only tree structures, but enable a natural Venn-diagram style of representation for concepts, as 2 We do not actually observe this in our vector-based model.",
"Our box embeddings represent entity types as n -dimensional hyperrectangles.",
"A box x is characterized by two points ( x m , x M ) , where x m , x M R d are the minimum and the maximum corners of the box x and x m,i x M,i for each coordinate i { 1 , ..., d } .",
"The volume of the box x is computed as Vol ( x ) = i ( x M,i x m,i ) .",
"If we normalize the volume of the box space to be 1 , we can interpret the volume of each box as the marginal probability of a mention exhibiting the given entity type.",
"Furthermore, the intersection volume between two boxes, x and y , is defined as Vol ( x y ) = i max (min( x M,i , y M,i ) max( x m,i , y m,i ) , 0) and can be seen as the joint probability of entity types x and y .",
"Thus, we can obtain the conditional probability P ( y | x ) = Vol ( x y ) Vol ( x ) .",
"Soft boxes Computing conditional probabilities based on hard intersection poses some practical difficulties in the context of machine learning: sparse gradients caused by disjoint or completely contained boxes prevent gradient-based optimization methods from working effectively.",
"To ensure that gradients always flow for disjoint boxes, Li et al. (2019) relax the hard edges of the boxes using Gaussian convolution.",
"We follow the more recent approach of Dasgupta et al. (2020), who further improve training of box embeddings using max and min Gumbel distributions (i.e., Gumbel boxes) to represent the min and max coordinates of a box.",
"Let s denote a sequence of context words and m denote an entity mention span in s .",
"Given the input tuple ( m, s ) , the output of the entity typing model is an arbitrary number of predicted types { t 0 , t 1 , ... } T , where t k is an entity type belonging to a type inventory T .",
"Because we do not assume an explicit type hierarchy, we treat entity typing as a multi-label classification problem, or |T | independent binary classification problems for each mention.",
"Section 3.3 will describe how to use a BERT-based model to predict a mention and context box 3 x from ( m, s ) .",
"For now, we assume x is given and we are computing the probability of that mention exhibiting the k th entity type, with type box y k .",
"Each type t k T has a dedicated box y k , which is parameterized by a center vector c ky R d and an offset vector o ky R d .",
"The minimum and maximum corners of a box y k are computed as y km = ( c ky softplus ( o ky )) and y kM = ( c ky + softplus ( o ky )) respectively, so that parameters c R d and o R d yield a valid box with nonzero volume.",
"The conditional probability of the type t k given the mention and context ( m, s ) is calculated as p ( t k | m, s ) = Vol ( z k ) Vol ( x ) = Vol ( x y k ) Vol ( x ) , where z k is the intersection between x and y k ((2) and (3) in Figure 2).",
"Our final type predictions are based on thresholding these probabilities; i.e., predict the type if p > 0 .",
"5 .",
"As mentioned in Section 3.1, we use the Gumbel box approach of Dasgupta et al. (2020), in which the box coordinates are interpreted as the location parameter of a Gumbel max (resp. min) distribution with variance .",
"In this approach, the intersection 3 We could represent mentions as points instead of boxes; however, representing them as boxes enables the size of a mention box to naturally reflect epistemic uncertainty about a mention's types given limited information.",
"Vol ( x ) i softplus M,i m,i 2 , where i is an index of each coordinate and 0 .",
"5772 is the EulerMascheroni constant, 4 and softplus( x ) = 1 t log(1 + exp( xt )) , with t as an inverse temperature value.",
"We format the context words s and the mention span m as x = [CLS] m [SEP] s [SEP] and chunk into WordPiece tokens (Wu et al., 2016).",
"Using pre-trained BERT 5 (Devlin et al., 2019), we encode the whole sequence into a single vector by taking the hidden vector at the [CLS] token.",
"A highway layer (Srivastava et al., 2015) projects down the hidden vector h [CLS] R to the R 2 d space, where is the hidden dimension of the encoder (BERT), and d is the dimension of the box space.",
"This highway layer transforms representations in a vector space to the box space without impeding the gradient flow.",
"We further split the hidden vector h R 2 d into two vectors: the center point of the box c x R d and the offset from the maximum and minimum corners o x R d .",
"The minimum and maximum corners of the mention and context box are computed as x m = ( c x SOFTPLUS ( o x )) and x M = ( c x + SOFTPLUS ( o x )) , where is an element-wise sigmoid function, and SOFTPLUS is an element-wise softplus function as defined in Section 3.2 ((1) in Figure 2).",
"The output of the softplus is guaranteed to be positive, guaranteeing that the boxes have volume greater than zero.",
"The goal of training is to find a set of parameters that minimizes the sum of binary cross-entropy losses over all types over all examples in our train-4",
"train-4 From Dasgupta et al. (2020), the Euler-Mascheroni con-stant appears due to the interpretation of x m,i , x M,i as the",
"+ (1 t k gold ) log(1 p ( t k | m, s )) , where t k gold { 0 , 1 } is the gold label for the type t k .",
"We optimize this objective using gradient-based optimization algorithms such as Adam (Kingma and Ba, 2015).",
"6 4 Experimental Setup Our focus here is to shed light on the difference between type hierarchies learned by the box-based model and the vector-based model.",
"To this end, we first evaluate those two models on standard entity typing datasets.",
"Then, we test models' consistency , robustness , and calibration , and evaluate the predicted types as entity representations on a downstream task (coreference resolution).",
"See Appendix A for hyperparameters.",
"Our chief comparison is between box-based and vector-based modeling of entity types.",
"As our main baseline for all experiments, we use a vector-based version of our entity typing model.",
"We use the same mention and context encoder followed by a highway layer, but this baseline has vector-based type embeddings (i.e., a |T | d matrix), and type predictions are given by a dot product between the type embeddings and the mention and context representation followed by element-wise logistic regression.",
"This model is identical to that of Onoe and Durrett (2020b) except for the additional highway layer.",
"Entity Typing We evaluate our approach on the Ultra-Fine Entity Typing (UFET) dataset (Choi et al., 2018) with the standard splits (2k for each of train, dev, and test).",
"In addition to the manually annotated training examples, we use the denoised distantly annotated training examples from Onoe and Durrett (2019).",
"7 This dataset contains 10,331 entity types, and each type is marked as one of the three classes: coarse , fine , and ultra-fine .",
"Note 6 With large type sets, most types are highly skewed towards the negative class ( > 99% negative for many finegrained types).",
"While past work such as Choi et al. (2018) has used modified training objectives to handle this class imbalance, we did not find any modification to be necessary.",
"7 This consists of 727k training examples derived from the distantly labeled UFET data.",
"that this classification does not provide explicit hierarchies in the types, and all classes are treated equally during training.",
"Additionally, we test our box-based model on three other entity typing benchmarks that have relatively simpler entity type inventories with known hierarchies , namely OntoNotes (Gillick et al., 2014), BBN (Weischedel and Brunstein, 2005) , and FIGER (Ling and Weld, 2012).",
"See Appendix B for more details on these datasets.",
"Consistency A model that captures hierarchical structure should be aware of the relationships between supertypes and subtypes.",
"When a model predicts a subtype, we want it to predict the corresponding supertype together, even when this is not explicitly enforced as a constraint or consistently demonstrated in the data, such as in the UFET dataset.",
"That is, when a model predicts artist , person should also be predicted.",
"To check this ability, we analyze the model predictions on the UFET dev set.",
"We select 30 subtypes from the UFET type inventory and annotate corresponding supertypes for them in cases where these relationships are clear, based on their cooccurrence in the UFET training set and human intuition.",
"Based on the 30 pairs, we compute accuracy of predicting supertypes and subtypes together.",
"Table 10 in Appendix C lists the 30 pairs.",
"Robustness Entity typing datasets with very large ontologies like UFET are noisy; does our box-based model's notion of hierarchy do a better job of handling intrinsic noise in a dataset?",
"To test this in a controlled fashion, we synthetically create noisy labels by randomly dropping the gold labels with probability 13 .",
"8 We derive two noisy training sets from the UFET training set: 1) adding noise to the coarse types and 2) adding noise to fine & ultra-fine types.",
"We train on these noised datasets and evaluate on the standard UFET dev set.",
"Calibration Desai and Durrett (2020) study calibration of pre-trained Transformers such as BERT and RoBERTa (Liu et al., 2019) on natural language inference, paraphrase detection, and commonsense reasoning.",
"In a similar manner, we investigate if our box-based entity typing model is calibrated: do the probabilities assigned to types by the model match the empirical likelihoods of those types?",
"Since models may naturally have different scales 8 If this causes the gold type set to be empty, we retain the original gold type(s); however, this case is rare.",
"for their logits depending on how long they are trained, we post-hoc calibrate each of our models using temperature scaling (Guo et al., 2017) and a shift parameter.",
"We report the total error (e.g., the sum of the errors between the mean confidence and the empirical accuracy) on the UFET dev set and the OntoNotes dev set.",
"Entity Representations We are interested in the usefulness of the trained entity typing models in a downstream task.",
"Following Onoe and Durrett (2020b), we evaluate entity representation given by the box-based and vector-based models on the Coreference Arc Prediction (CAP) task (Chen et al., 2019) derived from PreCo (Chen et al., 2018).",
"This task is a binary classification problem, requiring to judge if two mention spans (either in one sentence or two sentences) are the same entity or not.",
"As in Onoe and Durrett (2020b), we obtain type predictions (a vector of probabilities associated with types) for each span and use it as an entity representation.",
"The final prediction of coreference for a pair of mentions is given by the cosine similarity between the entity type probability vectors with a threshold 0 .",
"5 .",
"The original data split provides 8k examples for each of the training, dev, and test sets.",
"We report accuracy on the CAP test set.",
"Here we report entity typing performance on Ultra-Fine Entity Typing (UFET), OntoNotes, FIGER, and BBN.",
"For each dataset, we select the best model from 5 runs with different random seeds based on the development performance.",
"UFET Table 1 shows the macro-precision, recall, and F1 scores on the UFET test set.",
"Our box-based model outperforms the vector-based model and state-of-the-art systems in terms of macro-Total Coarse Fine Ultra-Fine Model P R F1 P R F1 P R F1 P R F1 Box 52.9 39.1 45.0 71.2 82.5 76.4 50.9 55.2 53.0 45.4 24.5 31.9 Vector 53.3 36.7 43.5 71.7 79.9 75.6 51.9 48.5 50.2 43.7 22.7 29.8 Choi et al. (2018) 48.1 23.2 31.3 60.3 61.6 61.0 40.4 38.4 39.4 42.8 8.8 14.6 Label GCN (Xiong et al., 2019) 49.3 28.1 35.8 66.2 68.8 67.5 43.9 40.7 42.2 42.4 14.2 21.3 ELMo (Onoe and Durrett, 2019) 50.7 33.1 40.1 66.9 80.7 73.2 41.7 46.2 43.8 45.6 17.4 25.2 HY XLarge (Lopez and Strube, 2020) 43.4 34.2 38.2 61.4 73.9 67.1 35.7 46.6 40.4 36.5 19.9 25.7 Table 2: Macro-averaged P/R/F1 on the dev set for the entity typing task of Choi et al. (2018) comparing various systems.",
"F1.",
"9 Compared to the vector-based model, the box-based model improves primarily in macro-recall compared to macro-precision.",
"Choi et al. (2018) is a LSTM-based model using GloVe (Pennington et al., 2014).",
"On top of this model, Xiong et al. (2019) add a graph convolution layer to model type dependencies.",
"Onoe and Durrett (2019) use ELMo (Peters et al., 2018) and apply denoising to fix label inconsistency in the distantly annotated data.",
"Note that past work on this dataset has used BERT-base (Onoe and Durrett, 2019).",
"Work on other datasets has used ELMo and observed that BERT-based models have surprisingly underper-formed (Lin and Ji, 2019).",
"Some of the gain from our vector-based model can be attributed to our use of BERT-Large; however, our box model still achieves stronger performance than the corresponding vector-based version which uses the same pre-trained model.",
"Table 2 breaks down the performance into the coarse , fine , and ultra-fine classes.",
"Our box-based model consistently outperforms the vector-based model in macro-recall and F1 across the three classes.",
"The largest gap in macro-recall is in the fine class, leading to the largest gap in macro-F1 within the three classes.",
"We also list the numbers from prior work in Table",
"2. HY XLarge (Lopez and Strube, 2020), a hyperbolic model designed to learn hierarchical structure in entity types, exceeds the performance of the models with similar sizes such as Choi et al. (2018) and Xiong et al. (2019) especially in macro-recall.",
"In the ultra-fine class, both our box-based model and HY XLarge achieve higher macro-F1 compared to their vector-based counterparts.",
"One possible reason for the higher recall of our 9 We omit the test number of Lopez and Strube (2020), since they report results broken down into coarse, fine, and ultra-fine types instead of an aggregated F1 value.",
"However, based on the development results, their approach substantially underperforms the past work of Onoe and Durrett (2019) regardless.",
"model is a stronger ability to model dependencies between types.",
"Instead of failing to predict a highly correlated type, the model may be more likely to predict a complete, coherent set of types.",
"Other datasets Table 3 compares macro-F1 and micro-F1 on the OntoNotes, BBN, and FIGER test sets.",
"10 On OntoNotes, our box-based model achieves better performance than the vector-based model.",
"Zhang et al. (2018) use document-level information, Chen et al. (2020) apply a hierarchical ranking loss that assumes prior knowledge of type hierarchies, and Lin and Ji (2019) propose an ELMo-based model with an attention layer over mention spans and train their model on the augmented data from Choi et al. (2018).",
"Among the models trained only on the original OntoNotes training set, the box-based model achieves the highest macro-F1 and micro-F1.",
"The state-of-the-art system on BBN, the system of Chen et al. (2020) in the undefined setting, uses explicit knowledge of the type hierarchy.",
"This is particularly relevant on the BBN dataset, where the training data is noisy and features training points with obviously conflicting labels like person and organization , which appear systematically in the data.",
"To simulate constraints like the ones they use, we use three simple rules to modify our models' prediction: (1) dropping person if organization exists, (2) dropping location if gpe exists, and (3) replacing facility by fac , since both versions of this tag appear in the training set but only fac in the dev and test set.",
"Our box-based model and the vector-based model perform similarly and both achieve results comparable with recent systems.",
"10 Note that our hyperparameters are optimized for macro F1 on OntoNotes.",
"with state-of-the-art systems.",
"We notice that some of the test examples have inconsistent labels (e.g., /organization/sports team is present, but its supertype /organization is missing), penalizing models that predict the supertype correctly.",
"In addition, FIGER, like BBN, has systematic shifts between training and test distributions.",
"We hypothesize that our model's hyperparameters (tuned on OntoNotes only) are suboptimal.",
"The high dev performance shown in Table 4 implies that our model optimized on held-out training examples may not capture these specific shifts as well as other models whose inductive biases are better suited to this unusually mislabeled data.",
"One factor we can investigate is whether our model is able to predict type relations in a sensible, consistent fashion independent of the ground truth for a particular example .",
"For this evaluation, we investigate our model's predictions on the UFET dev set.",
"We count the number of occurrences for each subtype in 30 supertype/subtype pairs (see Table 10 in Appendix C).",
"Then, for each subtype, we count how many times its corresponding supertype is also predicted.",
"Although these supertype-subtype relations are not strictly defined in the training data, we believe they should nevertheless be exhibited by models' predictions.",
"Accuracy is given by the ratio between those counts, indicating how often the supertype was correctly picked up.",
"Table 5 lists the total and per-supertype accuracy on the supertype/subtype pairs.",
"We report the number of subtypes grouped by their supertypes to show their frequency (the Count column in Table 5).",
"Our box-based model achieves better accuracy compared to the vector-based model on all supertypes.",
"The gaps are particularly large on place and organization .",
"Note that some of the UFET training examples have inconsistent labels (e.g., a subtype team can be a supertype organization or group ), and this ambiguity potentially confuses a model during training.",
"Even in those tricky cases, the box-based model shows reasonable performance.",
"The geometry of the box space itself gives some evidence as to why this consistency would arise (see Section 5.6 for visualization of box edges).",
"Table 6 analyzes models' sensitivity to the label noise.",
"We list the UFET dev performance by models trained on the noised UFET training set.",
"When the coarse types are noised (i.e., omitting some su-pertypes), the vector-based model loses 4 .",
"8 points of macro-F1 while our box-based model only loses 1 .",
"5 points.",
"A similar trend can be seen when the fine and ultra-fine types are noised (i.e., omitting some subtypes).",
"In both cases, the vector-based model shows lower recall compared to the same model trained on the clean data, while our box-based model is more robust.",
"We also note that the vector-based model tends to overfit to the training data quickly.",
"We hypothesize that the use of boxes works as a form of regularization, since moving boxes may be harder than moving points in a space, thus being less impacted by noisy labels.",
"Following Nguyen and O'Connor (2015), we split model confidence (output probability) for each typing decision of each example into 10 bins (e.g., 0-0.1, 0.1-0.2 etc.).",
"For each bin, we compute mean confidence and empirical accuracy.",
"We show the total calibration error (lower is better) as well as the scaling and shifting constants in Table 7.",
"As the results on UFET and OntoNotes show, both box-based and vector-based entity typing models can be Box Vector Supertype Count Acc.",
"This experiment evaluates if model outputs are immediately useful in a downstream task.",
"For this task, we use the box-based and vector-based entity typing models trained on the UFET training set (i.e., we do not train models on the CAP training set).",
"Table 8 shows the test accuracy on the CAP data.",
"Our box-based model achieves slightly higher accuracy than the vector-based model, indicating that out-of-the-box entity representations obtained by the box-based model contains more useful features for the CAP task.",
"11 5.6 Box Edges To analyze how semantically related type boxes are located relative to one another in the box space, we plot the edges of the person and actor boxes along the 109 dimensions one by one.",
"Figure 3 shows how those two boxes overlap each other in the high-dimensional box space.",
"The upper plot 11 Our results are not directly comparable to those of Onoe and Durrett (2020b); we train on the training set of UFET dataset, and they train on examples from the train, dev, and test sets.",
"8: Accuracy the CAP test set et al., 2019).",
"in Figure 3 compares the person box and the actor box learned on the UFET data.",
"We can see that the edges of person contain the edges of actor in many dimensions but not all, meaning that the person box overlaps with the actor box but doesn't contain it perfectly as we might expect.",
"However, we can additionally investigate whether the actor box is effectively contained in the person for parts of the space actually used by the mention boxes.",
"The lower plot in Figure 3 compares the person box and the minimum bounding box of the intersections between the actor and the mention and context boxes obtained using the UFET dev examples where the actor type is predicted.",
"This minimum bounding box approximates the effective region within the actor box.",
"Now the edges of actor are contained in the edges of person in the most of dimensions, indicating that the person box almost contains this effective actor box.",
"Embeddings Embedding concepts/words into a high-dimensional vector space (Hinton, 1986) has a long history and has been an essential part of neural networks for language (Bengio et al., 2003; Collobert et al., 2011).",
"There is similarly a long history of rethinking the semantics of these embedding spaces, such as treating words as regions using sparse count-based vectors (Erk, 2009a,b) or dense distributed vectors (Vilnis and McCallum, 2015).",
"Order embeddings (Vendrov et al., 2016) or their probabilistic version (POE) (Lai and Hocken-maier, 2017) are one technique suited for hierarchical modeling.",
"However, OE can only handle binary entailment decisions, and POE cannot model negative correlations between types, a critical limitation in its use as a probabilistic model; these shortcomings directly led to the development of box embeddings.",
"Hyperbolic embeddings (Nickel and Kiela,",
"2017; Lopez and Strube, 2020) can also model hierarchical relationships as can hyperbolic entailment cones (Ganea et al., 2018); however, these approaches lack a probabilistic interpretation.",
"Recent work on knowledge base completion (Abboud et al., 2020) and reasoning over knowledge graphs (Ren et al., 2020) embeds relations or queries using box embeddings, but entities are still represented as vectors.",
"In contrast, our model embed both entity mentions and types as boxes.",
"Entity typing Entity typing and named entity recognition (Tjong Kim Sang and De Meulder, 2003) are old problems in NLP.",
"Recent work has focused chiefly on predicted fine-grained entity types (Ling and Weld, 2012; Gillick et al., 2014; Choi et al., 2018), as these convey significantly more information for downstream tasks.",
"As a result, there is a challenge of scaling to large type inventories, which has inspired work on type embeddings (Ren et al., 2016a,b).",
"Entity typing information has been used across a range of NLP tasks, including models for entity linking and coreference (Durrett and Klein, 2014).",
"Typing has been shown to be useful for cross-domain entity linking specifically (Gupta et al., 2017; Onoe and Durrett, 2020a).",
"It has also recently been applied to coreference resolution (Onoe and Durrett, 2020b; Khosla and Rose, 2020) and text generation (Dong et al., 2020), suggesting that it can be a useful intermediate layer even in pre-trained neural models.",
"In this paper, we investigated a box-based model for fine-grained entity typing.",
"By representing entity types in a box embedding space and projecting entity mentions into the same space, we can naturally capture the hierarchy of and correlations between entity types.",
"Our experiments showed several benefits of box embeddings over the equivalent vector-based model, including typing performance, calibration, and robustness to noise.",
"Thanks to the members of the UT TAUR lab, Pengxiang Cheng, and Eunsol Choi for helpful discussion; Tongfei Chen and Ying Lin for providing the details of experiments.",
"This work was also partially supported by NSF Grant IIS-1814522, NSF Grant SHF-1762299, and based on research in part supported by the Air Force Research Laboratory (AFRL), DARPA, for the KAIROS program under agreement number FA8750-19-2-1003, as well as University of Southern California subcontract no. 123875727 under Office of Naval Research prime contract no.",
"N660011924032.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of AFRL, DARPA, or the U.S. Government."
] | [
"abstain",
"method",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"result",
"other",
"other",
"other",
"other",
"other"
] |
[
"The increasing size of generative Pre-trained Language Models (PLMs) have greatly increased the demand for model compression.",
"Despite various methods to compress BERT or its variants, there are few attempts to compress generative PLMs, and the underlying difficulty remains unclear.",
"In this paper, we compress generative PLMs by quantization.",
"We find that previous quantization methods fail on generative tasks due to the homogeneous word embeddings caused by reduced capacity, and varied distribution of weights .",
"Correspondingly, we propose a token-level contrastive distillation to learn distinguishable word embeddings, and a module-wise dynamic scaling to make quantizers adaptive to different modules.",
"Empirical results on various tasks show that our proposed method outperforms the state-of-the-art compression methods on generative PLMs by a clear margin.",
"With comparable performance with the full-precision models, we achieve 14.4 and 13.4 compression rates on GPT-2 and BART, respectively.",
"Transformer-based generative pre-trained language models (PLMs) show strong abilities of multitask and few-shot learning, and achieve remarkable performances on various tasks (Radford and Narasimhan, 2018; Brown et al., 2020; Lewis et al., 2020; Raffel et al., 2020; Chen et al., 2021).",
"However, they are usually expensive in terms of both computation and memory due to a large number of parameters, and the token-by-token generation process.",
"Many methods have been proposed to compress PLMs, but mostly focus on understanding tasks like sentence classification with BERT (Lan et al., 2019; Sun et al., 2020b; Jiao et al., 2020; Shen et al., 2020; Hou et al., 2020).",
"Recent works try to compress GPT-2 using tensor decomposition (Edalati et al., 2021), and knowledge distillation (Song et al., 2020), but the compression Figure 1: Performance of quantized GPT-2 with varying weight bit-widths and 8-bit activation, using different methods.",
"In this paper, we firstly explore compressing generative PLMs by quantizing the parameters from full-precision to lower bits.",
"We find that directly applying previous quantization methods designed for BERT or computer vision tasks to generative PLMs lead to poor performance.",
"Figure 1 shows that the performance drops sharply as the weight bit-width decreases.",
"To investigate the difficulty of quantizing generative PLMs, we find that the learned embeddings tend to be homogeneous and hard to distinguish due to the reduced capacity caused by quantization, while the weight distributions also vary significantly across different modules and different Transformer layers.",
"These problems are further magnified due to the nature of sequential left-to-right prediction of generative PLMs, as the quantization error will accumulate across time.",
"To alleviate the above problems, we propose a token-level contrastive distillation to contrast on tokens and make the word embedding distinguishable.",
"Besides, we propose a module-wise dynamic scaling for the quantizer to better adapt to different modules.",
"Empirical results on language modeling, next utterance prediction and summarization show that compared to the full-precision baseline, our quantized GPT and BART (abbreviated as QuantGPT and QuantBART) achieve comparable performance for 8/4-bit weight, and have only a slight drop for 2-bit weight, while being over 13 smaller.",
"compression methods on language modeling.",
"To summarize, our main contributions are: 1) We find that generative PLMs are hard to quantize due to homogeneous word embedding and varied weight distribution .",
"2) We then propose the token-level contrastive distillation and module-wise dynamic scaling, to make the word embedding more distinguishable and make quantizers adapt to different modules, respectively.",
"3) Empirical results on various tasks show the efficacy of our method.",
"In this section, we show that it is challenging to train a low-bit generative pre-trained model with conventional quantization approaches directly.",
"Before diving into details, we first review the necessary backgrounds of quantization.",
"In this paper, we apply the quantization-aware training (Courbariaux et al., 2015) to generative PLMs.",
"Specifically, denote the vectorized full-precision weight as w , each forward propagation first clips the weight by a positive clipping factor , and then quantizes the clipped weight to b -bit as w q = Q ( clip ( w , , ) / ) , (1) where Q is the quantization function that maps each entry in clip ( w , , ) / to its closest quantized value in the set of uniform discrete values { 1 , n 1 n , , 1 n , 0 , 1 n , , n 1 n , 1 } with n = 2 b 1 1 .",
"Then we compute the loss (cid:96) ( w q ) with w q .",
"During back propagation, we use the gradient with regard to the quantized weight (cid:96) ( w q ) as the Straight-Through-Estimator (Bengio et al., 2013) to update full-precision weights w due to the non-differentiability of Q ( ) .",
"A good clipping factor is expected to take the majority of full-precision weight into account via clipping, i.e. , quantizing the range where data are densely distributed to reduce quantization error.",
"To solve this problem, PACT (Choi et al., 2018) learns a parameterized clipping factor and achieves better results than setting a fixed clipping factor.",
"Instead of learning the clipping factor, LSQ (Esser et al., 2020) learns the step size /n , but requires a careful initialization and gradient update.",
"we use layer-wise quantization ( i.e. , one clipping factor for elements in each weight matrix) for all weight matrices in the Transformer layers and row-wise quantization ( i.e. , one clipping factor for each word embedding) for the embedding layer.",
"We use asymmetric uniform quantization for activations after self-attention and GeLU function whose elements are mostly positive, and symmetric uniform quantization for other activations.",
"We do not quantize layer-normalization layers, skip connections, biases due to small computational overhead.",
"We compare the following representative quantization methods including",
"(i) LAQ (Zhang et al., 2020) for BERT;",
"(ii) PACT (Choi et al., 2018) and LSQ (Esser et al., 2020)) for computer vision tasks, to generative pre-trained model, GPT-2.",
"Figure 1 shows the performance under different weight bit-widths, and the performance drops sharply as the bit-width decreases, especially for PACT and LSQ.",
"In the following, we study the potential reasons behind the difficulty of quantizing generative PLMs, by empirically investigating the properties of the word embedding and model parameters.",
"Homogeneous Word Embedding.",
"We first study the difficulty from the learned word embeddings of different models.",
"In Figure 2, we visually compare the distributions of the word embeddings of the full-precision and quantized models under the same scale.",
"As can be seen, the word embeddings of the full-precision model are scattered distinguishable, while those in previous quantization methods PACT, LSQ and LAQ learn homogeneous word embeddings which are clustered and less distinguishable, especially for PACT and LSQ.",
"We speculate this is caused by the sequential computation nature of GPT.",
"Specifically, unlike BERT which computes the representation of all tokens in parallel, GPT computes each token in left-to-right order, and the quantization error incurred in the previous tokens will pass on to future tokens, making the learning signal noisier over time, and finally less informative word embeddings.",
"A direct consequence of the homogeneous word embedding can be reflected in Figure 3. By comparing Figure 2 and Figure 3, we can find that the higher degree of homogeneity in the word embedding of a quantized model, the fewer dependencies among different tokens are kept.",
"a token-level contrastive learning to alleviate this problem.",
"Compared with PACT, LSQ and LAQ, our method not only aligns the token representations between the quantized and full-precision networks ( i.e. , diagonal boxes), but also captures the dependencies among different tokens (non-diagonal boxes).",
"More visualizations are available in Appendix C.3.",
"The non-distinguishable word embeddings and poor ability to capture contextualized dependencies also make methods like PACT and LSQ more likely to generate incorrect tokens, e.g. illogical and repeated text ( Section 4.4).",
"distribution of the weights in the full-precision model.",
"Figure 4 shows that the weight distributions of a 12-layer full-precision GPT-2 are highly skewed with outliers.",
"This causes difficulty in estimating the clipping factor of the quantizer by heuristic methods, or even by PACT which learns the through gradient descent.",
"Specifically, in PACT, the approximated gradient of only relies on the weights whose absolute values are larger than .",
"This solution ignores the effect of weights within [ , ] and depends heavily on the initialization of .",
"Figure 4 shows that an improper initialization together with the inaccurate gradient estimation of the clipping factor often make the learned of PACT too large, and can not provide fine resolution to the majority of weights within the clipping range.",
"The quantization error accumulated over time makes this problem more severe.",
"In this work, we re-parameterize the clipping factor to make the quantizer adaptive to each module in the Transformer layers, and consider both weights outside and inside the clipping range when estimating the gradient of the clipping factor.",
"As will be discussed in Section 3.2, we propose a module-wise dynamic scaling to reduce the clipping factor's sensitivity to initialization, and an improved gradient estimation that also considers the weights within [ , ] .",
"Figure 4 shows that the clipping factor learned by our method gives finer resolutions to the majority of the weights.",
"Based on the observations in Section 2.2, we propose a quantization method which utilizes token-level contrastive distillation to make the word embedding distinguishable (Section 3.1) and a module-wise dynamic scaling adjustment to learn better clipping factors (Section 3.2).",
"The proposed token-level contrastive distillation contrast among tokens instead of sequences sequence, to learn distinguishable representations for each token.",
"Inspired by Baevski et al. (2020), which uses in-utterance representation at different positions of the same utterance as negatives for speech feature learning, for each token of the quantized network, we use the representation of the same token from the full-precision teacher network as its positive, while representations of other tokens in the same sequence as negatives (Figure 5).",
"Inspired by He et al. (2020) which uses a momentum encoder for more consistent representation, we build a memory bank to store momentum token representations from the quantized network.",
"When computing the contrastive distillation loss, we load the representations of negative samples from the memory bank with cheap indexing operations.",
"Specifically, we use superscripts s and t to denote the quantized student network and full-precision teacher network, respectively.",
"Denote the lengthn input sequence of tokens as ( t 1 , t 2 , , t n ) .",
"For the i -th token t i , suppose its hidden states of the last Transformer layer from the quantized and full-precision network are linearly projected to ( h si , h ti ) R d , and q si is the smoothed representation of h si in the memory bank.",
"Denote S i as the indices of the sampled negatives for token i , the token-level contrastive distillation loss for the length-n sequence can be formulated as L cont = n (cid:88) i =1 log exp( s ( q s t i , h t t i ) / ) (cid:80) j S i exp( s ( q st i , h tt j ) / ) , (2) where s ( x , y ) = x (cid:62) y (cid:107) x (cid:107)(cid:107) y (cid:107) computes the cosine similarity, and is a fixed temperature parameter.",
"Then we update the representation of token t i in the memory bank with the moving-average of token representations from the quantized network: q st i m q st i + (1 m ) h st i , (3) where m [0 , 1) it the momentum coefficient that controls the smoothness of the token represenation.",
"Besides, we use an additional distillation loss L dist over the logits.",
"For the i -th token t i , suppose the logits of the quantized and full-precision network are z st i , z tt i R | V | , where | V | is the vocabulary size.",
"L dist is computed with the soft cross-entropy loss: L dist = n (cid:88) i =1 z tt i log( z st i ) .",
"Thus the total training loss is",
"where is a trade-off factor set as 0.1 by default.",
"Intuitively, for each token in the quantized network, L dist only encourages it to mimic its corresponding token of the teacher network, while L cont not only pulls it close to its positive, but also pushes it away from its negatives.",
"In this way, L cont helps the student to capture more information from the 4824 teacher's representation, as is also theoretically discussed in Tian et al. (2019).",
"The proposed token-level contrastive distillation is crucial to the performance, and outperforms the sequence-level counterpart (as will be shown empirically in Section 5.1.1).",
"We conjecture this is because",
"(i) token-level contrast alleviates the problem of homogeneous word embedding (Figure 2) in the low-bit quantization; and",
"(ii) similar to speech, the order of natural language is also sequential instead of spatial like images; and",
"(iii) the self-attention mechanism allows other tokens to learn representations contextualized on the studied token, and these in-sequence negatives are harder than those from in-batch sequences, allowing more efficient representation learning.",
"Based on the observation of varied weight distribution in Section 2.2, we propose a simple-yet-effective dynamic scaling according to the statistics of each module weight.",
"Specifically, instead of directly learning the original clipping factor as PACT, we turn to learn a new scaling factor , which is multiplied with the average weight magnitude (cid:107) w (cid:107) 1 n to get clipping factor : = (cid:107) w (cid:107) 1 n , (6) where (cid:107) (cid:107) 1 denotes (cid:96) 1 norm.",
"The scaling is initialized as 1, which not only eases the initialization but also ensures the initial clipping factor does not deviate far from the full-precision weights, regardless of the diversity of weight distribution.",
"Besides, we also design a more accurate gradient estimation of the scaling factor than PACT (Choi et al., 2018).",
"Previous PACT only back propagates through weights whose absolute values are larger the clipping factor ( i.e. | w | ).",
"Instead, we also consider the weights inside the clipping range ( i.e. | w | < ) as: (cid:96) = (cid:96) w q Q ( u ) (cid:107) w (cid:107) 1 n , w < (cid:96) w q [ w + Q ( u )] (cid:107) w (cid:107) 1 n , w (cid:96) w q Q ( u ) (cid:107) w (cid:107) 1 n , w > , (7) where (cid:96) is the total training loss and u = clip ( w , , ) / in Eq.",
"(1).",
"The detailed derivation can be found in Appendix A. Intuitively, the update of clipping factor should be influenced by both weights outside and inside [ , ] , since controls the quantization error of both, i.e. , a large clipping factor results in small quantization error for weights outside [ , ] , while large error for weights inside.",
"Our new estimation of the gradient of in Eq.",
"(7) considers weights both outside and inside [ , ] .",
"Additionally, the proposed scaling is less sensitive to the varied distribution of weight than PACT, since the gradient of scaling (cid:96) is proportional to the average weight magnitude (cid:107) w (cid:107) 1 n .",
"Tasks and Models.",
"In this section, we evaluate the efficacy of our proposed quantization method on three kinds of generative tasks on two kinds of generative pre-training models.",
"Specifically, we perform the proposed quantization approach on language modeling and next utterance prediction tasks on GPT-2 (Radford and Narasimhan, 2018), and abstractive summarization using BART (Lewis et al., 2020), and call the resultant models QuantGPT and QuantBART.",
"The token-level contrastive distillation is performed on the hidden states of the last layer of GPT-2 or the BART decoder.",
"More details about the datasets and model architectures can be found in Appendix B.1 and B.2.",
"Implementation Details.",
"For each downstream task with our proposed method, we first fine-tune a full-precision network using the pre-trained checkpoint from huggingface 1 for both GPT-2 and BART.",
"Then we use this fine-tuned network as the full-precision teacher network and to initialize the quantized student network.",
"We train each task with 8 V100 GPUs based on the Pytorch framework.",
"The detailed hyper-parameters for each task are available in Appendix B.3.",
"Compared Methods.",
"Since there are very few attempts to compress generative PLMs, we self-implement three baseline quantization methods PACT (Choi et al., 2018), LSQ (Esser et al., 2020) and LAQ (Hou and Kwok, 2018) for comparison.",
"Details about these methods are in Appendix B.4.",
"The task of language modeling is to predict the probability distribution over a sequence of words.",
"For language modeling, we experiment on WikiText2 (Merity et al., 2016), Penn Treebank (PTB) (Mikolov and Zweig, 2012) and WikiText103 (Mer-ity et al., 2016).",
"We use perplexity (PPL) to evaluate the performance for language modeling.",
"Comparison with the Full-precision Model.",
"From Table 1, the performance of the proposed method with 8-bit weight is comparable to the full-precision counterpart on PTB and WikiText103, while drops slightly on WikiText2.",
"A slightly more severe performance drop is observed as the bit-width decreases from 8 to 4, with a drop of around 1 PPL point on WikiText2 and WikiText103, and less than 0.1 PPL point on PTB.",
"When the bit-width of weight further goes down to 2, our method has an average of 2 PPL points drop, but achieves 14.4 model size reduction.",
"Comparison with Other Quantization Methods.",
"From Table 1, our method outperforms PACT, LSQ and LAQ for all bit-widths and tasks.",
"As the bit-width decreases from 8 to 4, the PPL of LSQ greatly increases, with the average PPL of LSQ increasing by over 5 times.",
"As the bit-width further decreases to 2, both LSQ and PACT fail on all datasets, despite their good performance on understanding tasks on BERT (Bai et al., 2021).",
"We conjecture it is because though both PACT and LSQ have learnable parameters, the accumulated quantization error of generative PLMs makes the updates of these parameters by gradient descent less stable.",
"On the other hand, the proposed module-wise dynamic scaling alleviates the problem.",
"against recent GPT-2 compression methods, including tensor decomposition method KnGPT2 (Edalati et al., 2021), as well as distillation methods DistilGPT2 and LightPAFF (Song et al., 2020).",
"From the comparison, our method outperforms the others in terms of model size and performance, even when weights are compressed to only 2 bits.",
"The task of next utterance prediction predicts the next utterance given the dialogue context.",
"It tests the language understanding ability of generative models.",
"For this task, we use a large-scale dialogue dataset, Persona-Chat (Zhang et al., 2018).",
"From Table 1, all quantization methods incur a clear performance drop compared to the full-precision baseline, even in the 8-bit setting.",
"As the quantization becomes more aggressive, i.e. , the bit-width gets smaller, the performance of PACT and LAQ decrease more significantly than ours.",
"In particular, LSQ diverges for 2-bit weight and its accuracy is only 5%, which is no better than a random guess as there are 20 classes.",
"Abstractive summarization aims at generating a terse summary that captures the main ideas of the source article.",
"We experiment on XSum (Narayan et al., 2018), whose ground-truth summarizations are highly abstractive and are challenging for many extractive strategies.",
"ROUGE 1, 2, L are used to evaluate the performance of this task.",
"Table 3 shows the results of the abstractive summarization.",
"As can be seen, our method constantly outperforms other methods again with a clear margin.",
"Example generated summarizations of different methods in Appendix C.2 show that the summaries generated by QuantBART are logical and terse, while those from PACT have repeated texts.",
"As shown in Figure 6, we ablate on how to choose negative samples in contrastive learning.",
"Specifically, we compare our method with variants of token-level contrastive learning, which select negative samples of each token from",
"(a) representations of other tokens in both the full-precision and quantized networks ( fp+quan. );",
"(b) representations of other tokens in the quantized network ( quan. only ); and",
"(c) the whole vocabulary randomly for each training iteration ( global ).",
"Besides, we compare with",
"(d) sequence-level contrastive learning by pulling together representations of the same sequence, and pushing away representations of",
"differ-(a) fp+quan.",
"ent ones from the teacher network ( in-batch ).",
"Representation of a sequence is defined as the mean of representations of all tokens in the sequence.",
"From Table 4, fp+quan. and quan. only performs worse than QuantGPT, which uses full-precision representations of other tokens as negative samples.",
"This indicates that noisy representations of tokens from the not-fully-trained quantized network may not be sufficient.",
"global performs even worse, which we conjecture is because, for one token, negative tokens chosen from the same sequence are contextually related to it and more informative than random tokens.",
"in-batch performs worse than all token-level variants, which may be because generative tasks make predictions in a token-wise manner and rely heavily in finer-grained token-wise representations.",
"Interestingly, contrary to in-batch negative sampling in computer vision (Chen et al., 2020), we find that reducing the number of negative samples by reducing the batch size from 32 to 16 slightly improves performance.",
"In Figure 7, we plot the PPL of 2-bit QuantGPT on the PTB dataset, with varying number of negative samples.",
"We plot the mean results with standard 4827 Figure 7: Effect of the number of negative samples.",
"deviations from 5 independent runs.",
"As can be seen, the performance improves and converges gradually as the number of negative samples increases.",
"Figure 7 also shows that using the moving-average representations ( q st i in Eq.",
"(3)) of negative samples in the memory bank has better performance than using the immediate representations ( h st i in Eq.",
"(3)), because of a smoother and more consistent representation of tokens.",
"In Table 5, we report the training speed and memory consumption of training the GPT-2 model on the PTB dataset with and without the proposed token-level contrastive loss.",
"Batch size is set as 4 per device, which can be increased by using GPUs with larger memory or reducing the sequence length of samples.",
"As can be seen, with the proposed token-level contrastive loss, the performance clearly improves with only slightly slower training speed and more memory consumption.",
"In Table 6, we compare the different representations to perform the contrastive loss.",
"The decoder-last( resp. decoder-first) denotes performing the proposed token-level contrastive loss on the hidden states from the last decoder layer (resp. first decoder layer) followed by a linear transformation.",
"From Table 6, decoder-last performs better than decoder-first.",
"A possible reason is that the hidden states of the last decoder blocks contain rich information from all previous layers (Xiong et al., 2020).",
"Since the experiments of abstractive summarization are conducted on BART, which has both encoder and decoder layers, we also study the contrastive loss on the encoder-last and encoder-first.",
"In the ablation on the encoder, the contrastive loss L cont are computed on the source input (arti-cles), instead of target input (summaries).",
"From Table 6, decoder-last also has better ROUGE 1, 2, L values than other counterparts.",
"Figure 8 shows the learned scaling of different modules in the 2-bit GPT-2 model.",
"As can be seen, the scalings of different modules vary a lot, verifying the need for module-wise dynamic scaling.",
"In addition, we investigate the effect of the proposed dynamic scaling and the new estimation of the gradient in Eq.",
"(7) with two variants: 1) L dist only which removes the token-level contrastive learning; and 2) Ours with PACT which removes the contrastive learning, and estimates the gradient with PACT which only considers the weights whose absolute values are larger than the clipping factor .",
"As shown in Table 7, the performance gets worse without contrastive learning to learn the distinguishable representations of tokens.",
"The performance drops significantly when using PACT to estimate the gradient of the proposed scaling, especially for the WikiText103 dataset, verifying the efficacy of the new gradient estimation.",
"Compression of Generative Pre-trained Language Models.",
"Some early explorations compress the generative pre-trained language models.",
"KnGPT2 (Edalati et al., 2021) applies the Kronecker decomposition to compress the GPT.",
"DistilGPT2 2 distills a 12-layer GPT-2 to a 6-layer one, which is twice as fast during inference.",
"LightPAFF (Song et al., 2020) proposes a distillation approach that the training loss is a combination of a maximum likelihood loss of the student model, and the KL divergence between the output of teacher and student models.",
"SpAtten (Wang et al., 2021) proposes a sparse model with algorithm and architecture co-design, which removes uninformative tokens and attention heads.",
"Compared with these methods, we not only study the difficulties of compression from the properties of generative tasks, 2 https://transformer.huggingface.co/ model/distil-gpt2 4828 WikiText2 PTB WikiText103 Persona-Chat XSum Metric PPL ( ) PPL ( ) PPL ( ) Acc( % ) ( ) R1 ( ) R2 ( ) RL ( ) decoder-last 17.30 16.12 16.98 74.78 39.15 16.72 31.72 decoder-first 18.02 16.61 17.25 74.75 39.11 16.70 31.62 encoder-last --38.91 16.72 31.67 encoder-first --38.87 16.70 31.56 Table 6: Representations for the contrastive loss L cont in 2-bit setting.",
"Quantization of Pre-trained Language Models.",
"Quantization compresses a model by representing the 32-bit floating-point parameter with a low-bit representation, and has been widely used in various domains as it does not require designing a new model architecture.",
"There have been many attempts to quantize task-specific BERT models (Zafrir et al., 2019; Shen et al., 2020; Zadeh et al., 2020) with only negligible performance drop on natural language understanding tasks.",
"Recent works (Zhang et al., 2020; Bai et al., 2021) even push the weight bit-width down to as low as 1-bit.",
"Despite the success of these approaches for BERT models, attempts to quantize generative PLMs are scarce, and the underlying difficulty remains unclear.",
"Contrastive Learning.",
"Contrastive learning aims at pushing the representations of similar samples together while pulling those of dissimilar ones apart.",
"and is widely used for large-scale self-supervised learning in various domains (Chen et al., 2020; Sun et al., 2020a; Baevski et al., 2020; Huang et al., 2022), and multi-modal learning (Radford et al., 2021; Jia et al., 2021).",
"SimCLR (Chen et al., 2020) directly uses other in-batch samples as negatives, and sufficient large batch size is required to work well.",
"MoCo (He et al., 2020) maintains a large number of negative samples in a queue and uses a moving average key encoder to improve consistency.",
"Contrastive learning without negative samples is also proposed in BYOL (Grill et al., 2020) and SimSiam (Chen and He, 2021).",
"Contrastive representation distillation (Tian et al., 2019) distills the knowledge from the teacher network to the student network by maximizing the mutual information between them.",
"The closest work with our token-level contrastive distillation is Wav2vec 2.0 (Baevski et al., 2020), which use in-utterance representations at different positions as negatives in speech learning.",
"Besides the difference in the modality and tasks, our method also differs from theirs in (1) Model: We quantize the model parameters and activations while they do not; (2) Representation: For each sample, we use the output of the full-precision and the quantized networks as its two views, while they use the quantized and the contextualized representation.",
"(3) Loss: We calculate loss over all tokens in an auto-regressive manner, while they only calculate over the masked tokens non-autoregressively.",
"This paper studies low-bit quantization of generative PLMs.",
"We find that the difficulty of quantizing generative PLMs lies in homogeneous word embedding and varied distribution of weights.",
"To alleviate the two problems, we propose token-level contrastive learning to learn more distinguishable token emebeddings, as well as a module-dependent dynamic scaling for more accurate quantization.",
"Extensive experiments on language modeling, next utterance prediction and abstractive summarization demonstrate the efficacy of our proposed method.",
"We hope our work sheds a light on the compression of generative PLMs in future exploration.",
"This work is supported in part by the General Research Fund (GRF) project 17206020, and in part by ACCESS, AI Chip Center for Emerging Smart Systems, Hong Kong SAR."
] | [
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"result",
"abstain",
"objective",
"objective",
"result",
"result",
"objective",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"other"
] |
[
"NLP is currently dominated by language models like RoBERTa which are pretrained on billions of words.",
"But what exact knowledge or skills do Transformer LMs learn from large-scale pretraining that they cannot learn from less data?",
"To explore this question, we adopt five styles of evaluation: classifier probing, information-theoretic probing, unsupervised relative acceptability judgments, unsupervised language model knowledge probing, and fine-tuning on NLU tasks.",
"We then draw learning curves that track the growth of these different measures of model ability with respect to pretraining data volume using the MiniBERTas, a group of RoBERTa models pretrained on 1M, 10M, 100M and 1B words.",
"We find that these LMs require only about 10M to 100M words to learn to reliably encode most syntactic and semantic features we test.",
"They need a much larger quantity of data in order to acquire enough commonsense knowledge and other skills required to master typical downstream NLU tasks.",
"The results suggest that, while the ability to encode linguistic features is almost certainly necessary for language understanding, it is likely that other, unidentified, forms of knowledge are the ma-jor drivers of recent improvements in language understanding among large pretrained models.",
"Pretrained language models (LMs) like BERT and RoBERTa have become ubiquitous in NLP.",
"New models require massive datasets of tens or even hundreds of billions of words (Brown et al., 2020) to improve on existing models on language understanding benchmarks like GLUE (Wang et al., 2018).",
"Much recent work has used probing methods to evaluate what these models do and do not *Equal Contribution N o n e 1 M 1 0 M 1 0 0 M 1 B 3 0 B 0.0 0.2 0.4 0.6 0.8 1.0 Classifier Probing (Edge Probing) MDL Reflected (Edge Probing) BLiMP LAMA SuperGLUE R e l a t i v e P e r f o r m a n c e Pretraining Dataset Size Figure 1: Overall learning curves for the five evaluation methods.",
"learn (Belinkov and Glass, 2019; Tenney et al., 2019b; Rogers et al., 2020; Ettinger, 2020).",
"Since most of these works only focus on models pretrained on a fixed data volume (usually billions of words), many interesting questions regarding the effect of the amount of pretraining data remain unanswered: What have data-rich models learned that makes them so effective on downstream tasks?",
"How much pretraining data is required for LMs to learn different grammatical features and linguistic phenomena?",
"Which of these skills do we expect to improve when we scale pretraining past 30 billion words?",
"Which aspects of grammar can be learned from data volumes on par with the input to human learners, around 10M to 100M words (Hart and Risley)?",
"With these questions in mind, we evaluate and probe the MiniBERTas (Warstadt et al., 2020b), a group of RoBERTa models pretrained on 1M, 10M, 100M, and 1B words, and RoBERTa BASE (Liu et al., 2019) pretrained on about 30B words, using five methods: First we use standard classifier probing on the edge probing suite of NLP tasks (Tenney et al., 2019b) to measure the quality of the syntactic and semantic features that can be extracted by a downstream classifier with each level of pretraining.",
"Second, we apply minimum description length (MDL) probing (Voita and Titov, 2020) to the edge probing suite, with the goal of quantifying the accessibility of these features.",
"Third, we test the models' knowledge of various syntactic phenomena using unsupervised acceptability judgments on the BLiMP suite (Warstadt et al., 2020a).",
"Fourth, we probe the models' world knowledge and commonsense knowledge using unsupervised language model knowledge probing with the LAMA suite (Petroni et al., 2019).",
"Finally, we fine-tune the models on five tasks from SuperGLUE (Wang et al., 2019) to measure their ability to solve conventional NLU tasks.",
"For each evaluation method, we fit an exponential learning curve to the results as a function of the amount of pretraining data, shown in Figure",
"1. We have two main findings: First, the results of classifier probing, MDL probing, and unsupervised relative acceptability judgement (BLiMP) show that the linguistic knowledge of models pretrained on 100M words and 30B words is similar, as is the description length of linguistic features.",
"Second, RoBERTa requires billions of words of pretraining data to effectively acquire factual knowledge and to make substantial improvements in performance on dowstream NLU tasks.",
"From these results, we conclude that there are skills critical to solving downstream NLU tasks that LMs can only acquire with billions of words of pretraining data.",
"Future work will likely need to look beyond core linguistic knowledge if we are to better understand and advance the abilities of large language models.",
"We probe the MiniBERTas, a set of 12 RoBERTa models pretrained from scratch by Warstadt et al. (2020b) on 1M, 10M, 100M, and 1B words, the publicly available RoBERTa BASE (Liu et al., 2019),",
"which is pretrained on about 30B words, 1 and 3 RoBERTa BASE models with randomly initialized parameters.",
"Descriptions of the five evaluation methods appear in the subsequent sections.",
"2 In each experiment, we test all 16 models on each task involved.",
"To show the overall trend of improvement, we use non-linear least squares to fit an exponential learning curve to the results.",
"3 We upsample RoBERTa BASE results in regression in order to have an equal number of results for each data quantity.",
"We use a four-parameter exponential learning curve used to capture diminishing improvement in performance as a function of the number of practice trials (Heathcote et al., 2000; Leibowitz et al., 2010): E ( P n ) = P ( P P 0 ) e n where E ( P n ) is the expected performance after n trials, 4 P 0 and P and are the initial and asymptotic performance, and and are coefficients to translate and dilate the curve in the log domain.",
"We plot the results in a figure for each task, where the y -axis is the score and the x -axis is the amount of pretraining data.",
"5 For some plots, we use min-max normalization to adjust the results into the range of [0, 1], where 0 and 1 are the inferred values of P 0 and P , respectively.",
"6 3 Classifier Probing We use the widely-adopted probing approach of Ettinger et al. (2016), Adi et al. (2017), and others which we call classifier probing to test the extent to which linguistic features like part-of-speech and coreference are encoded in the frozen model representations.",
"We adopt the ten probing tasks in the 1 The miniBERTas' training data is randomly sampled from Wikipedia and Smashwords in a ratio of 3:1.",
"These two datasets are what Devlin et al. (2019) use to pretrain BERT and represent a subset of the data used to pretrain RoBERTa.",
"RoBERTa BASE 's training data also includes of news and web data in addition to Wikipedia and Smashwords.",
"Warstadt et al. ran pretraining 25 times with varying hyperparameter values and model sizes for the 1M-, 10M-, and 100M-word settings, and 10 times for the 1B-word setting.",
"All the models were pretrained with early stopping on validation set perplexity.",
"For each dataset size, they released the three models with the lowest validation set perplexity, yielding 12 models in total.",
"2 Code: https://github.com/nyu-mll/ pretraining-learning-curves 3 We use SciPy's curve fit implementation.",
"Classifier probing has recently come under scrutiny.",
"Hewitt and Liang (2019) and Voita and Titov (2020) caution that the results depend on the complexity of the probe, and so do not precisely reveal the quality of the representations.",
"However, 7 Task data sources: Part-of-Speech, Constituents, Entities, SRL, and OntoNotes coref.",
"from Weischedel et al. (2013), Dependencies from Silveira et al. (2014), Sem.",
"Proto Role 1 from Teichert et al. (2017), Sem.",
"Proto Role 2 from Rudinger et al. (2018), Relations (SemEval) from Hendrickx et al. (2010), and Winograd coref.",
"from Rahman and Ng (2012); White et al. (2017).",
"we see two advantages to this method: First, the downstream classifier setting and F1 evaluation metric make these experiments easier to interpret in the context of earlier results than results from relatively novel probing metrics like minimum description length.",
"Second, we focus on relative differences between models rather than absolute performance, and include a randomly initialized baseline model in the comparison.",
"When the model representations are random, the probe's performance reflects the probe's own ability to solve the target task.",
"Therefore, any improvements over this baseline value are due to the representation rather than the probe itself.",
"Task formulation and training Following Tenney et al., we use attention pooling to generate representation(s) of the token span(s) involved in the task and train an MLP that predicts whether a given label correctly describes the input span(s).",
"We adopt the mix representation approach described in the paper.",
"To train the probes, we use the same hyperparameters used in Tenney et al. and tune the batch size and learning rate.",
"8 Results We plot results in Figure",
"2. From the single-task curves we conclude that most of the 8 We randomly sample 5 pairs from the range { 8 , 16 , 32 , 64 } { 5 e 5 , 1 e 4 , 5 e 4 } .",
"feature learning occurs with < 100M words of pretraining data.",
"Based on the best-fit curve, we can estimate that 90% of the attainable improvements in overall performance are achieved with < 20M words.",
"Most plots show broadly similar learning curves, which rise sharply with less than 1M words of pretraining data, reach the point of fastest growth (in the log domain) around 1M words, and are nearly saturated with 100M words.",
"The most notable exception to this pattern is the Winograd task, which only rises significantly between 1B and 30B words of pretraining data.",
"9 As the Winograd task is designed to test commonsense knowledge and reasoning, the results suggest that these features require more data to encode than syntactic and semantic ones, with the caveat that the dataset is smaller than the other edge probing tasks, and results on Winograd tasks are highly sensitive to factors such as task formulation (Liu et al., 2020).",
"We observe some general differences between different types of tasks.",
"Figure 3 shows the aggregated learning curves of syntactic, semantic, and commonsense tasks.",
"The syntactic learning curve rises slightly earlier than the semantic one and 90% of the improvements in syntactic learning can be made with about 10M words, while the semantic curve still rises slightly after 100M.",
"This is not surprising, as semantic computation is generally thought to depend on syntactic representa-9 These results are also noisier, similar to what Tenney et al. (2019b) find.",
"tions (Heim and Kratzer, 1998).",
"The commonsense learning curve (for Winograd coref. only) rises far later, and is projected to continue to rise long after syntactic and semantic features stop improving.",
"In this experiment, we study the MiniBERTas with MDL probing (Voita and Titov, 2020), with the goal of revealing not only the total amount of feature information extracted by the probe, but also the effort taken by the probe to extract the features.",
"MDL measures the minimum number of bits needed to transmit the labels for a given task given that both the sender and the receiver have access to the pretrained model's encoding of the data.",
"A well-trained decoder model can help extract labels from the representations and thus reduce the number of bits needed to transmit the labels.",
"Since the model itself will also need to be transmitted, the total description length is a sum of two terms: The data codelength is the number of bits needed to transmit the labels assuming the receiver has the trained decoder model, i.e. the cross-entropy loss of the decoder.",
"The model codelength is the number of bits needed to transmit the decoder parameters.",
"We follow Voita and Titov's online code estimation of MDL, where the decoder is implicitly transmitted.",
"As in Section 3, we train decoders using the same hyperparameter settings and task definitions as Tenney et al. (2019b).",
"Results We plot the online code results in Figure",
"4. The overall codelength shows a similar trend to edge probing: Most of the reduction in feature codelength is achieved with fewer than 100M words.",
"MDL for syntactic features decreases even sooner.",
"Results for Winograd are idiosyncratic, probably due to the failure of the probes to learn the task.",
"The changes in model codelength and data codelength are shown on the bar plots in Figure",
"4. We compute the data codelength following Voita and Titov (2020) using the training set loss of a classifier trained on the entire training set, and the model codelength is the total codelength minus the data codelength.",
"The monotonically decreasing data codelength simply reflects the fact that the more data rich RoBERTa models have smaller loss.",
"When it comes to the model codelength, however, we generally observe the global minimum for the randomly initialized models (i.e., at None).",
"This is expected, and intuitively reflects the fact that a decoder trained on random representations would provide little information about the labels, and so it would be optimal to transmit a very simple decoder.",
"On many tasks, the model codelength starts to decrease when the pretraining data volume exceeds a certain amount.",
"However, this trend is not consistent across tasks and the effect is relatively small.",
"We use the BLiMP benchmark (Warstadt et al., 2020a) to test models' knowledge of individual grammatical phenomena in English.",
"BLiMP is a challenge set of 67 tasks, each containing 1000 minimal pairs of sentences that highlight a particular morphological, syntactic, or semantic phenomena.",
"Minimal pairs in BLiMP consist of two sentences that differ only by a single edit, but contrast in grammatical acceptability.",
"A language model classifies a minimal pair correctly if it assigns a higher probability to the acceptable sentence.",
"Since RoBERTa is a masked language model (MLM), we measure pseudo log-likelihood (Wang and Cho, 2019) to score sentences (Salazar et al., 2020).",
"Results We plot learning curves for BLiMP in Figure",
"5. Warstadt et al. organize the 67 tasks in 10 Unlike us, Voita and Titov redefine the edge probing tasks as standard multi-class classification tasks.",
"BLiMP into 12 categories based on the phenomena tested and for each category we plot the average accuracy for the tasks in the category.",
"We do not normalize results in this plot.",
"For the no-data baseline, we plot chance accuracy of 50% rather than making empirical measurements from random RoBERTa models.",
"We find the greatest improvement in overall BLiMP performance between 1M and 100M words of pretraining data.",
"With 100M words, sensitivity to contrasts in acceptability overall is within 9 accuracy points of humans, and improves only 6 points with additional data.",
"This shows that substantial knowledge of many grammatical phenomena can be acquired from 100M words of raw text.",
"We also observe significant variation in how much data is needed to learn different phenomena.",
"We see the steepest learning curves on agreement phenomena, with nearly all improvements occurring between 1M and 10M words.",
"For phenomena involving wh -dependencies, i.e. filler-gap dependencies and island effects, we observe shallow and delayed learning curves with 90% of possible improvements occurring between 1M and 100M words.",
"The relative difficulty of wh -dependencies can probably be ascribed to the long-distance na-ture and lower frequency of those phenomena.",
"We also observe that the phenomena tested in the quantifiers category are never effectively learned, even by RoBERTa BASE .",
"These phenomena include subtle semantic contrastsfor example Nobody ate { more than, *at least } two cookies which may involve difficult-to-learn pragmatic knowledge (Co-hen and Krifka, 2014).",
"LAMA is a test suite introduced by Petroni et al. to test LMs' factual knowledge.",
"It contains over 50,000 cloze statements converted from subject-relation-object triples or question-answer pairs extracted from four datasets: GoogleRE, 11 TRE-x (El-sahar et al., 2018), ConceptNet (Speer and Havasi, 2012), and SQUAD (Rajpurkar et al., 2016).",
"The Google-RE and T-REx tasks are each divided into three sub-tasks.",
"Results We plot the results on LAMA in Figure",
"6. The fastest growing point of most curves appears after 100M words.",
"This relatively large quantity of 11 source: https://code.google.com/archive/ p/relation-extraction-corpus/ .",
"data may be needed for the model to be exposed to relevant factual knowledge.",
"The learning curves for many LAMA tasks do not show clear signs of saturation in the range of 0 to 30B words, suggesting further improvements are likely with much larger data quantities.",
"Among LAMA tasks, ConceptNet most directly tests commonsense knowledge.",
"The steep slope of the ConceptNet curve between 100M and 30B words of pretraining data and the large precision jump ( > 0 . 05 ) from 1B to 30B show that increasing the pretraining data to over 1B words significantly improve the LM's commonsense knowledge, which explains the shape of the Winograd coref.",
"learning curve in Section 3.",
"SuperGLUE is a benchmark suite of eight classification-based language-understanding tasks (Wang et al., 2019).",
"We test each MiniBERTa on five SuperGLUE tasks on which we expect to see significant variation at these scales.",
"12 The hyperpa-12 Task data sources: CB from De Marneffe et al. (2019), BoolQ from Clark et al. (2019), COPA from Roemmele et al. (2011), WiC from Pilehvar and Camacho-Collados (2019); Miller (1995); Schuler (2005), and RTE from Dagan et al. rameter search range used for each task is described in the appendix.",
"Results We plot the results on the selected SuperGLUE tasks in Figure 7.",
"Improvements in SuperGLUE performance require a relatively large volume of pretraining data.",
"For most tasks, the point of fastest improvement in our interpolated curve occurs with more than 1B words.",
"None of the tasks (with the possible exception of Commitment-Bank) show any significant sign of saturation at 30B words.",
"This suggests that some key NLU skills are not learnt with fewer than billions of words, and that models are likely to continue improving substantially on these tasks given 10 to 100 times more pretraining data.",
"Figure 1 plots the overall learning curves for these five methods together.",
"The most striking result is that good NLU task performance requires far more data than achieving good representations for linguistic features.",
"Classifier probing, MDL (2006); Bar Haim et al. (2006); Giampiccolo et al. (2007); Bentivogli et al. (2009).",
"probing, and acceptability judgment performance all improve rapidly between 1M and 10M words and show little improvement beyond 100M words, while performance on the NLU tasks in SuperGLUE appears to improve most rapidly with over 1B words and will likely continue improving at larger data scales.",
"While the linguistic features we test are undoubtedly needed to robustly solve most NLU tasks, a model that can extract and encode a large proportion of these features may still perform poorly on SuperGLUE.",
"What drives improvements in NLU task performance at larger data scales remains an open question.",
"Factual knowledge may play a large role in explaining SuperGLUE performance.",
"This hypothesis is backed up by results from the Winograd edge-probing task (Figure 2) and the LAMA tasks (Figure 6), which suggest that most of the improvements in the model's world and commonsense knowledge are made with over 100M words.",
"However, the LAMA learning curve shows signs of slowing between 1B and 30B words, the SuperGLUE curve does not.",
"Another possible explanation is that linguistic features encoded by a model may not be easily accessible during fine-turning.",
"Warstadt et al. (2020b) found that RoBERTa can learn to reliably extract many linguistic features with little pretraining data, but requires billions of words of pretraining data before it uses those features preferentially when generalizing.",
"In light of Warstadt et",
"al.'s findings, we had initially hypothesized that feature accessibility as measured by MDL might show a shallower or later learning curve than standard classifier probing.",
"13 13 Warstadt et",
"al.'s experiments are quite different to ours.",
"However, we do not totally rule out the possibility that linguistic feature accessibility continues to improve with massive pretraining sets.",
"There are potential modifications to Voita and Titov's approach that could more faithfully estimate feature accessibility.",
"First, although RoBERTa is actually fine-tuned in most applications, we and Voita and Titov measure MDL taking the outputs of the frozen RoBERTa model as input to a trainable MLP decoder.",
"It may be more relevant to measure MDL by fine-tuning the entire model (Lovering et al., 2021).",
"Second, MDL actually estimates the information content of a particular dataset, rather than the feature itself.",
"Whitney et al. (2020) propose an alternative to MDL that measures feature complexity in a way that does not depend on the size of the dataset.",
"Probing neural network representations has been an active area of research in recent years (Belinkov and Glass, 2019; Rogers et al., 2020).",
"With the advent of large pretrained Transformers like BERT (Devlin et al., 2019), numerous papers have used classifier probing methods to attempt to locate linguistic features in learned representations with striking positive results (Tenney et al., 2019b; Hewitt and Manning, 2019).",
"However, another thread has found problems with many probing methods: Classifier probes can learn too much from training data (Hewitt and Liang, 2019) and can fail to distinguish features that are extractable from features that are actually used when generalizing on downstream tasks (Voita and Titov, 2020; Pimentel et al., 2020; Elazar et al., 2020).",
"Moreover, different probing methods often yield contradictory results (Warstadt et al., 2019).",
"There have also been a few earlier studies investigating the relationship between pretraining data volume and linguistic knowledge in language models.",
"Studies of unsupervised acceptability judgments find fairly consistent evidence of rapid improvements in linguistic knowledge up to about 10M words of pretraining data, after which improvements slow down for most phenomena.",
"van They measure RoBERTa's preference for linguistic features over surface features during fine-tuning on ambiguous classification tasks.",
"Schijndel et al. (2019) find large improvements in knowledge of subject-verb agreement and reflexive binding up to 10M words, and little improvement between 10M and 80M words.",
"Hu et al. (2020) find that GPT-2 trained on 42M words performs roughly as well on a syntax benchmark as a similar model trained on 100 times that amount.",
"Other studies have investigated how one model's linguistic knowledge changes during the training process, as a function of the number of updates (Saphra and Lopez, 2019; Chiang et al., 2020).",
"Raffel et al. (2020) also investigate how performance on SuperGLUE (and other downstream tasks) improves with pretraining dataset size between about 8M and 34B tokens.",
"In contrast to our findings, they find that models with around 500M tokens of pretraining data can perform similarly on downstream tasks to models with 34B words.",
"However, there are many differences in our settings that may lead to this divergence.",
"For example, they pretrain for a fixed number of iterations (total-ing 34B token updates), whereas the MiniBERTas we use were pretrained with early stopping.",
"They also use prefix prompts in their task formulations, and adopt an encoder-decoder architecture and thus their model has roughly twice the number of parameters of the largest model we evaluate.",
"There is also some recent work that investigates the effect of pretraining data size of other languages.",
"Micheli et al. (2020) pretrain BERT-based language models on 10MB, 100MB, 500MB, 1GB, 2GB, and 4GB of French text and test them on a question answering task.",
"They find that the French MLM pretrained on 100MB of raw text has similar performance to the ones pretrained on larger datasets on the task, and that corpus-specific self-supervised learning does not make a significant difference.",
"Martin et al. (2020) also show that French MLMs can already learn a lot from small-scale pretraining.",
"Concurrent work (Liu et al., 2021) probes RoBERTa models pretrained on different numbers of iterations using a set of probing tasks similar to ours.",
"They find that linguistic abilities are acquired fastest, world and commonsense knowledge learning takes more iterations, and reasoning abilities are never stably acquired.",
"Both studies show that linguistic knowledge is easier to learn than factual knowledge.",
"We track several aspects of RoBERTa's ability as pretraining data increases.",
"We find that ability in syntax and semantics largely saturates after only 10M to 100M words of pretraining dataon par with the data available to human learnerswhile learning factual knowledge requires much more data.",
"We also find that scaling pretraining data size past billions of words significantly improves the NLU performance, though we cannot fully explain what abilities drive this improvement.",
"Answering this question could be a stepping stone to more data-efficient models.",
"This material is based upon work supported by the National Science Foundation under grant no. 1850208.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily re-flect the views of the National Science Foundation.",
"We would like to thank Udit Arora, Jason Phang, Clara Vania, and ML 2 for feedback on an earlier draft.",
"Thanks also to Kyunghyun Cho, Tal Linzen, Grusha Prasad, and Emin Orhan for suggestions regarding the exponential learning curve, and to Elena Voita, Ian Tenney, and Haokun Liu for the discussion about the implementation of the probing methods.",
"There are several ethical reasons to study LMs with limited pretraining data.",
"Training massive LMs like RoBERTa from scratch comes with non-trivial environmental costs (Strubell et al., 2019), and they are expensive to train, limiting contributions to pretraining research from scientists in lower-resource contexts.",
"By evaluating LMs with limited pretraining, we demonstrate that smaller LMs match massive ones in performance in many respects.",
"We also identify a clear gap in our knowledge regarding why extensive pretraining is effective.",
"Answering this question could lead to more efficient pretraining and ultimately reduce environmental costs and make NLP more accessible.",
"On the other hand, there is a danger that our work, by projecting substantial gains in model performance by increasing pretraining size, could legitimize and encourage the trend of ever growing datasets.",
"Massive LMs also replicate social biases present in training data (Nangia et al., 2020).",
"By establishing benchmarks for smaller LMs and highlighting their efficacy for certain purposes, we hope to spur future work that takes advantage of smaller pretraining datasets to carefully curate the data distribution, as advocated by Bender et al. (2021), in order to build LMs that do less to reproduce harmful biases and are more inclusive of minority dialects."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"result",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"The recent proliferation of fake news has triggered a number of responses, most notably the emergence of several manual fact-checking initiatives.",
"As a result and over time, a large number of fact-checked claims have been accumulated, which increases the likelihood that a new claim in social media or a new statement by a politician might have already been fact-checked by some trusted fact-checking organization, as viral claims often come back after a while in social media, and politicians like to repeat their favorite statements, true or false, over and over again.",
"As manual fact-checking is very time-consuming (and fully automatic fact-checking has credibility issues), it is important to try to save this effort and to avoid wasting time on claims that have already been fact-checked.",
"Interestingly, despite the importance of the task, it has been largely ignored by the research community so far.",
"Here, we aim to bridge this gap.",
"In particular, we formulate the task and we discuss how it relates to, but also differs from, previous work.",
"We further create a specialized dataset, which we release to the research community.",
"Finally, we present learning-to-rank experiments that demonstrate sizable improvements over state-of-the-art retrieval and textual similarity approaches.",
"The year 2016 was marked by massive disinformation campaigns related to Brexit and the US Presidential Elections.",
"While false statements are not a new phenomenon, e.g., yellow press and tabloids have been around for decades, this time things were notably different in terms of scale and effectiveness thanks to social media platforms, which provided both a medium to reach millions of users and an easy way to micro-target specific narrow groups of voters based on precise geographical, demographic, psychological, and/or political profiling.",
"Governments, international organizations, tech companies, media, journalists, and regular users launched a number of initiatives to limit the impact of the newly emerging large-scale weaponization of disinformation 1 online.",
"Notably, this included manual fact-checking initiatives, which aimed at debunking various false claims, with the hope to limit its impact, but also to educate the public that not all claims online are true.",
"Over time, the number of such initiatives grew substantially, e.g., at the time of writing, the Duke Reporters' Lab lists 237 active fact-checking organizations plus another 92 inactive.",
"2 While some organizations debunked just a couple of hundred claims, others such as Politifact, 3 FactCheck.org, 4 Snopes, 5 and Full Fact 6 have fact-checked thousands or even tens of thousands of claims.",
"The value of these collections of resources has been recognized in the research community, and they have been used to train systems to perform automatic fact-checking (Popat et al., 2017; Wang, 2017; Zlatkova et al., 2019) or to detect check-worthy claims in political debates (Hassan et al., 2015; Gencheva et al., 2017; Patwari et al., 2017; Vasileva et al., 2019).",
"There have also been datasets that combine claims from multiple fact-checking organizations (Augenstein et al., 2019), again with the aim of performing automatic fact-checking.",
"1 In the public discourse, the problem is generally known as fake news , a term that was declared Word of the Year 2017 by Collins dictionary.",
"Despite its popularity, it remains a confusing term, with no generally agreed upon definition.",
"It is also misleading as it puts emphasis on",
"(a) the claim being false, while generally ignoring",
"(b) its intention to do harm.",
"In contrast, the term disinformation covers both aspects",
"(a) and",
"(b), and it is generally preferred at the EU level.",
"2 http://reporterslab.org/ fact-checking/ 3 http://www.politifact.com/ 4 http://www.factcheck.org/ 5 http://www.snopes.com/ 6 http://fullfact.org/ Figure 1: A general information verification pipeline.",
"It has been argued that checking against a database of previously fact-checked claims should be an integral step of an end-to-end automated fact-checking pipeline (Hassan et al., 2017).",
"This is illustrated in Figure 1, which shows the general steps of such a pipeline (Elsayed et al., 2019): ( i ) assess the check-worthiness of the claim (which could come from social media, from a political debate, etc.), ( ii ) check whether a similar claim has been previously fact-checked (the task we focus on here), ( iii ) retrieve evidence (from the Web, from social media, from Wikipedia, from a knowledge base, etc.), and ( iv ) assess the factuality of the claim.",
"From a fact-checkers' point of view, the abundance of previously fact-checked claims increases the likelihood that the next claim that needs to be checked would have been fact-checked already by some trusted organization.",
"Indeed, viral claims often come back after a while in social media, and politicians are known to repeat the same claims over and over again.",
"7 Thus, before spending hours fact-checking a claim manually, it is worth first making sure that nobody has done it already.",
"On another point, manual fact-checking often comes too late.",
"A study has shown that fake news spreads six times faster than real news (Vosoughi et al., 2018).",
"Another study has indicated that over 50% of the spread of some viral claims happens within the first ten minutes of their posting on social media (Zaman et al., 2014).",
"At the same time, detecting that a new viral claim has already been fact-checked can be done automatically and very quickly, thus allowing for a timely action that can limit the spread and the potential malicious impact.",
"From a journalistic perspective, the ability to check quickly whether a claim has been previously fact-checked could be revolutionizing as it would allow putting politicians on the spot in real time, e.g., during a live interview.",
"In such a scenario, automatic fact-checking would be of limited utility as, given the current state of technology, it does not offer enough credibility in the eyes of a journalist.",
"Interestingly, despite the importance of the task of detecting whether a claim has been fact-checked in the past, it has been largely ignored by the research community.",
"Here, we aim to bridge this gap.",
"Our contributions can be summarized as follows: We formulate the task and we discuss how it relates to, but differs from, previous work.",
"We create a specialized dataset, which we release to the research community.",
"8 Unlike previous work in fact-checking, which used normalized claims from fact-checking datasets, we work with naturally occurring claims, e.g., in debates or in social media.",
"We propose a learning-to-rank model that achieves sizable improvements over state-of-the-art retrieval and textual similarity models.",
"The remainder of this paper is organized as follows: Section 2 discusses related work, Section 3 introduces the task, Section 4 presents the dataset, Section 5 discusses the evaluation measures, Section 6 presents the models we experiment with, Section 7 described our experiments, and Section 8 concludes and discusses future work.",
"To the best of our knowledge, the task of detecting whether a claim has been previously fact-checked was not addressed before.",
"Hassan et al. (2017) mentioned it as an integral step of their end-to-end automated fact-checking pipeline, but there was very little detail provided about this component and it was not evaluated.",
"In an industrial setting, Google has developed Fact Check Explorer , 9 which is an exploration tool that allows users to search a number of fact-checking websites (those that use ClaimReview from schema.org 10 ) for the mentions of a topic, a person, etc.",
"However, the tool cannot handle a complex claim, as it runs Google search, which is not optimized for semantic matching of long claims.",
"While this might change in the future, as there have been reports that Google has started using BERT in its search, at the time of writing, the tool could not handle a long claim as an input.",
"8 Data and code are available at the following URL: https://github.com/sshaar/ That-is-a-Known-Lie 9 http://toolbox.google.com/factcheck/ explorer 10 http://schema.org/ClaimReview A very similar work is the ClaimsKG dataset and system (Tchechmedjiev et al., 2019), which includes 28K claims from multiple sources, organized into a knowledge graph (KG).",
"The system can perform data exploration, e.g., it can find all claims that contain a certain named entity or keyphrase.",
"In contrast, we are interested in detecting whether a claim was previously fact-checked.",
"Other work has focused on creating datasets of textual fact-checked claims, without building KGs.",
"Some of the larger ones include the Liar, Liar dataset of 12.8K claims from PolitiFact (Wang, 2017), and the MultiFC dataset of 38K claims from 26 fact-checking organizations (Augenstein et al., 2019), the 10K claims Truth of Various Shades (Rashkin et al., 2017) dataset, among several other datasets, which were used for automatic fact-checking of individual claims, not for checking whether an input claim was fact-checked previously.",
"Note that while the above work used manually normalized claims as input, we work with naturally occurring claims as they were made in political debates and speeches or in social media.",
"There has also been a lot of research on automatic fact-checking of claims and rumors, going in several different directions.",
"One research direction focuses on the social aspects of the claim and how users in social media react to it (Canini et al., 2011; Castillo et al., 2011; Ma et al., 2016; Gorrell et al., 2019; Ma et al., 2019).",
"Another direction mines the Web for information that proves or disproves the claim (Mukherjee and Weikum, 2015; Karadzhov et al., 2017; Popat et al., 2017; Baly et al., 2018b; Mihaylova et al., 2018; Nadeem et al., 2019).",
"In either case, it is important to model the reliability of the source as well as the stance of the claim with respect to other claims; in fact, it has been proposed that a claim can be fact-checked based on its source alone (Baly et al., 2018a) or based on its stance alone (Dungs et al., 2018).",
"A third direction performs fact-checking against Wikipedia (Thorne et al., 2018; Nie et al., 2019), or against a general collection of documents (Miranda et al., 2019).",
"A fourth direction uses a knowledge base or a knowledge graph (Ciampaglia et al., 2015; Shiadralkar et al., 2017; Gad-Elrab et al., 2019a,b; Huynh and Papotti, 2019).",
"Yet another direction performs fact-checking based on tables (Chen et al., 2019).",
"There is also recent work on using language models as knowledge bases (Petroni et al., 2019).",
"Ours is yet another research direction.",
"While our main contribution here is the new task and the new dataset, we should also mentioned some work on retrieving documents.",
"In our experiments, we perform retrieval using BM25 (Robert-son and Zaragoza, 2009) and re-ranking using BERT-based similarity, which is a common strategy in recent state-of-the-art retrieval models (Akkaly-oncu Yilmaz et al., 2019a; Nogueira and Cho, 2019; Akkalyoncu Yilmaz et al., 2019b).",
"Our approach is most similar to that of (Akka-lyoncu Yilmaz et al., 2019a), but we differ, as we perform matching, both with BM25 and with BERT, against the normalized claim, against the title, and against the full text of the articles in the fact-checking dataset; we also use both scores and reciprocal ranks when combining different scores and rankings.",
"Moreover, we use sentence-BERT instead of BERT.",
"Previous work has argued that BERT by itself does not yield good sentence representation.",
"Thus, approaches such as sentence-BERT (Reimers and Gurevych, 2019) have been proposed, which are specifically trained to produce good sentence-level representations.",
"This is achieved using Siamese BERT networks that are fine-tuned on NLI and STS-B data.",
"Indeed, in our experiments, we found sentence-BERT to perform much better than BERT.",
"The Universal Sentence Encoder (Cer et al., 2018) is another alternative, but sentence-BERT worked better in our experiments.",
"Finally, our task is related to semantic relatedness tasks, e.g., from the GLUE benchmark (Wang et al., 2018), such as natural language inference, or NLI task (Williams et al., 2018), recognizing textual entailment, or RTE (Bentivogli et al., 2009), paraphrase detection (Dolan and Brockett, 2005), and semantic textual similarity, or STS-B (Cer et al., 2017).",
"However, it also differs from them, as we will see in the following section.",
"We define the task as follows: Given a check-worthy input claim and a set of verified claims, rank those verified claims, so that the claims that can help verify the input claim, or a sub-claim in it, are ranked above any claim that is not helpful to verify the input claim.",
"Table 1 shows some examples of inputverified claim pairs, where the input claims are sentences from the 2016 US Presidential debates, and the verified claims are the corresponding fact-checked counter-parts in PolitiFact.",
"We can see on line 1 of Table 1 a trivial case, where the verified claim is identical to the input claim; however, such cases are not very frequent, as the experiments with the BM25 baseline in Section 7 below will show.",
"Lines 2 and 3 show harder cases, where the input claim and its manually annotated counter-part are quite different in their lexical choice, and yet the latter can serve to verify the former.",
"From the above examples, it is clear that ours is not a paraphrasing task, as illustrated by examples 25.",
"It is also not a natural language inference (NLI) or a recognizing textual entailment (RTE) task, as a claim can have sub-claims, which complicates entailment reasoning (as illustrated by examples 45).",
"Finally, the task goes beyond simple textual similarity, and thus it is not just an instance of semantic textual similarity (STS-B).",
"Note that we do not try to define formally what makes a verified claim a good match for an input claim.",
"Instead, we trust the manual annotations for this by fact-checking experts, which they perform when they comment on the claims made in political debates and speeches.",
"In many cases, the fact-checkers have explicitly indicated which previously fact-checked claim corresponds to a given original claim in a debate/speech.",
"A similar approach was adopted for a related task, e.g., it was used to obtain annotated training and testing data for the Check-Worthiness task of the CLEF Check-That!",
"Lab (Atanasova et al., 2018, 2019; Barron-Cedeno et al., 2020).",
"We created two datasets by collecting, for each of them, a set of verified claims and matching inputverified claims pairs (below, we will also refer to these pairs as Input-VerClaim pairs): the first dataset, PolitiFact, is about political debates and speeches and it is described in Section 4.1; the second dataset, Snopes, includes tweets, and it is described in Section 4.2.",
"PolitiFact is a fact-checking website that focuses on claims made by politicians, elected officials, and influential people in general.",
"PolitiFact fact-checks claims by assigning a truth value to them and publishing an article that gives background information and explains the assigned label.",
"This is similar to how other fact-checking websites operate.",
"VerClaim : the text of the claim, which is a normalized version of the original claim, as the human fact-checkers typically reformulate it, e.g., to make it clearer, context-independent, and self-contained; TruthValue : the label assigned to the claim; 11 Title : the title of the article on PolitiFact that discusses the claim;",
"Body : the body of the article.",
"11 We do not use the claim veracity labels in our experiments, but we collect them for possible future use.",
"Often, after a major political event, such as a political speech or a debate, PolitiFact publishes reports 12 that discuss the factuality of some of the claims made during that event.",
"Importantly for us, in these reports, some of the claims are linked to previously verified claims in PolitiFact.",
"Such pairs of an original claim and a previously verified claim form our ClaimVerClaim pairs.",
"We collected such overview reports for 78 public events in the period 20122019, from which we collected a total of 768 InputVerClaim pairs.",
"Given an Input claim, we refer to the corresponding verified claim in the pair as its matching VerClaim claim .",
"In general, there is a 1:1 correspondence, but in some cases an Input claim is mapped to multiple VerClaim claims in the database, and in other cases, multiple Input claims are matched to the same VerClaim claim.",
"Thus, the task in Section 3 reads as follows when instantiated to the PolitiFact dataset: given an Input claim, rank all 16,636 VerClaim claims, so that its matching VerClaim claims are ranked at the top.",
"Snopes is a website specialized in fact-checking myths, rumors, and urban legends.",
"We used information from it to create a second dataset, this time focusing on tweets.",
"We started with a typical article about a claim, and we looked inside the article for links to tweets that are possibly making that claim.",
"Note that some tweets mentioned in the article are not making the corresponding verified claim, and some are not making any claims; we manually checked and filtered out such tweets.",
"We collected 1,000 suitable tweets as Input claims, and we paired them with the corresponding claim that the page is about as the VerClaim claim.",
"We further extracted from the article its Title , and the TruthValue of the Input claim (a rating of the claims assigned from Snopes 13 ).",
"Examples of inputVerClaim pairs are shown in Table 2.",
"Comparing them to the ones from Table 1, we can observe that the Snopes tweets are generally more self-contained and context-independent.",
"Finally, we created a set of VerClaim claims to match against using the Snopes claims in the ClaimsKG dataset (Tchechmedjiev et al., 2019).",
"Ultimately, our Snopes dataset consists of 1,000 inputVerClaim pairs and 10,396 verified claims.",
"Statistics about the datasets are shown in Table 3; the datasets are available online.",
"8 4.3 Analysis In section 3, we discussed that matching some of the input claims with the corresponding verified claims can be a non-trivial task, and we gave examples of easy and hard cases.",
"To capture this distinction, we classify InputVerClaim pairs into two types.",
"Type-1 pairs are such for which the Input claim can be matched to the VerClaim using simple approximate string matching",
"techniques., e.g., as in line 1 of Table 1 and lines 1-2 of Table 2.",
"Conversely, Type-2 pairs are such for which the Input claim cannot be easily mapped to the VerClaim, e.g., as in lines 2-5 of Table 1 and line 3 of Table 2.",
"We manually annotated a sample of 100 pairs from PolitiFact inputVerClaimpairs and we found 48% of them to be of Type-2 .",
"13 http://www.snopes.com/ fact-check-ratings/ PolitiFact Snopes Input VerClaim pairs 768 1,000 training 614 800 testing 154 200 Total # of verified claims 16,636 10,396 Table 3: Statistics about the datasets: shown are the number of Input VerClaim pairs and the total number of VerClaim claims to match an Input claim against.",
"We further analyzed the complexity of matching an Input claim to the VerClaim from the same Input VerClaim pair using word-level TF.IDF-weighted cosine similarity.",
"Table 4 shows the number of pairs for which this similarity is above a threshold.",
"We can see that, for PolitiFact, only 27% of the pairs have a similarity score that is above 0.25, while for Snopes, this percentage is at 50%, which suggests Snopes should be easier than PolitiFact.",
"We treat the task as a ranking problem.",
"Thus, we use ranking evaluation measures, namely mean reciprocal rank (MRR), Mean Average Precision (MAP), and MAP truncated to rank k (MAP@ k ).",
"We also report HasPositive@ k , i.e., whether there is a true positive among the topk results.",
"Measures such as MAP@ k and HasPositive@ k for k { 1 , 3 , 5 } would be relevant in a scenario, where a journalist needs to verify claims in real time, in which case the system would return a short list of 3-5 claims that the journalist can quickly skim and make sure they are indeed a true match.",
"We further report MAP@ k and HasPositive@ k for k { 10 , 20 } as well as MAP (untruncated), which would be more suitable in a non-real-time scenario, where recall would be more important.",
"Here, we describe the models we experiment with.",
"A simple baseline is to use BM25 (Robertson and Zaragoza, 2009), which is classical approach in information retrieval.",
"BM25 assigns a score to each query-document pair based on exact matching between the words in the query and the words in a target document, and it uses this score for ranking.",
"We experiment with BM25 using the input claim as a query against different representations of the verified claims: IR (Title): the article titles; IR (VerClaim): the verified claims; IR (Body): the article bodies; Combinations of the above.",
"The BM25 algorithm focuses on exact matches, but as lines 25 in Table 1 and line 3 in Table 2 show, the input claim can use quite different words.",
"Thus, we further try semantic matching using BERT.",
"Initially, we tried to fine-tune BERT (Devlin et al., 2019), but this did not work well, probably because we did not have enough data to perform the fine-tuning.",
"Thus, eventually we opted to use BERT (and variations thereof) as a sentence encoder, and to perform max-pooling on the penultimate layer to obtain a representation for an input piece of text.",
"Then, we calculate the cosine similarity between the representation of the input claim and of the verified claims in the dataset, and we use this similarity for ranking.",
"BERT:base,uncased : the base, uncased model of BERT; RoBERTa:base : the base, cased model of RoBERTa (Liu et al., 2019); sentence-BERT:base : BERT, specifically trained to produce good sentence representations (Reimers and Gurevych, 2019); this is unlike BERT and RoBERTa, for which we found the cosine similarity between totally unrelated claims often to be quite high; sentence-BERT:large : the large version of sentence-BERT.",
"BERT on full articles: We further extend the above models to match against the body of the document, borrowing and further developing an idea from (Yang et al., 2019).",
"We use sentence-BERT to encode each sentence in the Body , and then we compute the cosine similarity between the input claim and each of those sentences.",
"Next, we collect scores for each claim-document pair, as opposed to having only a single score representing the similarity between the input and a verified claim.",
"These scores include the cosine similarity for ( i ) claim vs. VerClaim , ( ii ) claim vs. Title , and ( iii ) topn scores of the claim vs. Body sentences.",
"Finally, we train a binary classifier that takes all these scores and predicts whether the claim-document pair is a good match.",
"Since BM25 and BERT capture different types of information, they can be combined to create a set of features based on the rankings returned by BM25 and the similarity scores computed on the embedding of the claim pairs.",
"Following (Nogueira et al., 2019), we use a reranking algorithm, namely rankSVM with an RBF kernel, which learns to rank using a pairwise loss.",
"Below we describe our experiments on the PolitiFact and the Snopes datasets.",
"We start with IR-based models, followed by BERT-based semantic similarity on claims and articles, and finally we experiment with pairwise learning-to-rank models.",
"For the PolitFact dataset, we perform experiments with all models from Section 6, and we report the results in Table 5.",
"We ran experiments matching the Input against Title , VerClaim , Body and Title+VerClaim+Body .",
"We can see in Table 5 that using the Title yields the lowest results by a large margin.",
"This is because the Title is only a summary, while VerClaim and Body contain more details and context.",
"We can further see that the best representation, on all measures, is to use the Body , which performs better than using VerClaim by 0.12-0.14 in terms of MAP@ k and MAP, and by 0.09 on MRR.",
"This is probably because the article body is longer, which increases the probability of having more words matching the input claim.",
"Finally, matching against all three targets is slightly worse than using Body only.",
"Next, we experimented with cosine similarity between the Input claim and VerClaim , as the BM25 experiments above have shown that using VerClaim is better than using Title .",
"We can see in Table 5 that BERT:uncased is better than RoBERTa (which is case sensitive) on all measures, which suggests that casing might not matter.",
"We further see that the best semantic model is sentence-BERT: both the base and the large variants of sentence-BERT beat BERT and RoBERTa by at least 13% absolute across all measures (and in some cases, by a much larger margin).",
"Next, we performed full article experiments, where we used the large model of sentence-BERT, as it outperformed the rest of the BERT models shown in Table 5.",
"We extracted similarity scores for each claim-document pair using sentence-BERT:large.",
"We then trained a simple neural network (20-relu-10-relu) for classification.",
"We trained the model for 15 epochs with a batch size of 2,048 using the Adam optimizer with a learning rate of 1e-3.",
"We further used class weighting because the data was severely imbalanced: there were 614 positive exampled out of 10M claim-document pairs, as we paired each of the 614 input claims with each of the 16,636 verified claims in the database.",
"We ran the experiment for various numbers of topn cosine scores obtained from the Body , as we wanted to investigate the relationship between the model performance and the information it uses.",
"In the BERT on Full Articles section in Table 5, we can see that using the scores for the top-4 best-matching sentences from the article body, together with scores for VerClaim and for the article title, yielded the best performance.",
"Moreover, the results got closer to those for BM25, even though overall they still lag a bit behind.",
"Finally, we trained a pairwise RankSVM model to re-rank the topN results retrieved using IR:Body .",
"For each claim-document pair in the topN list, we collected the scores for IR:Title , IR:VerClaim , IR:Body , as well as from sentence-BERT:large for n = 4 with their corresponding reciprocal ranks for the rankings they induce.",
"As described in Section 6.3, using both methods yields better predictions as this combines exact matching and semantic similarities.",
"We can see in Table 5 that the re-ranker yielded consistent and sizable improvement over the models from the previous experiments, by 0.04-0.05 points absolute across the different measures, which is remarkable as it is well-known from the literature that BM25 is a very strong baseline for IR tasks.",
"This is because our reranker is able to use both exact and semantic matching to target the different kinds of pairs that are found in the dataset.",
"We also notice that the performance of the re-ranker improves as we increase the length of the list that is being re-ranked until a length of 100, and it starts degrading after that.",
"On the Snopes dataset, we performed experiments analogous to those for the PolitiFact dataset, but with some differences, the most important being that this time we did not perform matching against the article body as the tweets that serve as input claims in our Snopes dataset were extracted from the article body.",
"Note that this was not an issue for the PolitiFact dataset, as the input claim in a debate/speech required a lot of normalization and could not be found in the article body verbatim.",
"Table 6 reports the evaluation results.",
"We ran three experiments using BM25 to match the Input against Title , VerClaim , and Title+VerClaim",
"We can see in Table 6 that, just like for PolitiFact, using VerClaim performed better than using the article title, which is true for all evaluation measures; however, this time the margin was much smaller than it was for PolitiFact.",
"We further noticed a small improvement for all MAP@ k measures when matching against both the article Title and the VerClaim .",
"Overall, BM25 is a very strong baseline for Snopes due to the high word overlap between the input claims and the verified claims (also, compared to PolitiFact, as we have seen in Table 4 above).",
"Based on the lessons learned from PolitiFact, for semantic matching, we only experimented with sentence-BERT.",
"We can see in Table 6 that this yielded results that were lower than for BM25 by a margin of at least 0.10 absolute for almost every reported measure; yet, this margin is smaller than for PolitiFact.",
"For these experiments, once again matching against the verified claim outperformed matching against the article title by a sizable margin.",
"As mentioned above, we did not perform matching of the input tweet against the article body, as this would easily give away the answer: the tweet can be found verbatim inside the target article.",
"For the purpose of comparison, we tried to filter out the text of the input tweet from the text of the article body before attempting the matching, but we still got unrealistically high results.",
"Thus, ultimately we decided to abandon these experiments.",
"Finally, we trained a pairwise RankSVM model to re-rank the topN results from IR:VerClaim+Title",
"For each claim-document pair in the topN list, we extracted the scores from IR:Title , IR:VerClaim , IR:VerClaim+Title , sentence-BERT:large:Title , and sentence-BERT:large:VerClaim , as well as the corresponding reciprocal ranks for all target documents according to each of these scores.",
"This is the same as for PolitiFact, except that now we do not use scores for matching the input to a document body.",
"We can see in Table 6 that the best re-ranking model yielded sizable improvements over the best individual model by 0.09-0.18 points absolute on all evaluation measures.",
"Comparing the best re-ranking models for Snopes and PolitiFact, we can see that Snopes performed best when using a top-50 list, compared to top-100 for PolitiFact.",
"We believe that this is due to the difference in performance of the retrieval models used to extract the topN pairs: for Snopes, IR:VerClaim+Title has an MMR score of 0.664, while the best PolitiFact model, IR:Body , has an MRR score of 0.565.",
"Thus, for Snopes we rerank an N -best list extracted by a stronger IR model, and thus there is no need to go that deep in the list.",
"We have argued for the need to address detecting previously fact-checked claims as a task of its own right, which could be an integral part of automatic fact-checking, or a tool to help human fact-checkers or journalists.",
"We have created specialized datasets, which we have released, together with our code, to the research community in order to enable further research.",
"Finally, we have presented learning-to-rank experiments, demonstrating sizable improvements over state-of-the-art retrieval and textual similarity approaches.",
"In future work, we plan to extend this work to more datasets and to more languages.",
"We further want to go beyond textual claims, and to take claim-image and claim-video pairs as an input.",
"This research is part of the Tanbih project, 14 which aims to limit the effect of fake news, disinformation, propaganda, and media bias by making users aware of what they are reading."
] | [
"abstain",
"other",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"other"
] |
[
"The success of a text simplification system heavily depends on the quality and quantity of complex-simple sentence pairs in the training corpus, which are extracted by aligning sentences between parallel articles.",
"To evaluate and improve sentence alignment quality, we create two manually annotated sentence-aligned datasets from two commonly used text simplification corpora, Newsela and Wikipedia.",
"We propose a novel neural CRF alignment model which not only leverages the sequential nature of sentences in parallel documents but also utilizes a neural sentence pair model to capture semantic similarity.",
"Experiments demonstrate that our proposed approach outperforms all the previous work on monolingual sentence alignment task by more than 5 points in F1.",
"We apply our CRF aligner to construct two new text simplification datasets, NEWSELA-AUTO and WIKI-AUTO , which are much larger and of better quality compared to the existing datasets.",
"A Transformer-based seq2seq model trained on our datasets establishes a new state-of-the-art for text simplification in both automatic and human evaluation.",
"1 1 Introduction Text simplification aims to rewrite complex text into simpler language while retaining its original meaning (Saggion, 2017).",
"Text simplification can provide reading assistance for children (Kajiwara et al., 2013), non-native speakers (Petersen and Ostendorf, 2007; Pellow and Eskenazi, 2014), nonexpert readers (Elhadad and Sutaria, 2007; Siddharthan and Katsos, 2010), and people with language disorders (Rello et al., 2013).",
"As a preprocessing step, text simplification can also improve 1 Code and data are available at: https://github.",
"the performance of many natural language processing (NLP) tasks, such as parsing (Chandrasekar et al., 1996), semantic role labelling (Vickrey and Koller, 2008), information extraction (Miwa et al., 2010) , summarization (Vanderwende et al., 2007; Xu and Grishman, 2009), and machine translation (Chen et al., 2012; Stajner and Popovic, 2016).",
"Automatic text simplification is primarily addressed by sequence-to-sequence (seq2seq) models whose success largely depends on the quality and quantity of the training corpus, which consists of pairs of complex-simple sentences.",
"Two widely used corpora, NEWSELA (Xu et al., 2015) and WIKILARGE (Zhang and Lapata, 2017), were created by automatically aligning sentences between comparable articles.",
"However, due to the lack of reliable annotated data, 2 sentence pairs are often aligned using surface-level similarity metrics, such as Jaccard coefficient (Xu et al., 2015) or cosine distance of TF-IDF vectors (Paetzold et al., 2017), which fails to capture paraphrases and the context of surrounding sentences.",
"A common drawback of text simplification models trained on such datasets is that they behave conservatively, performing mostly deletion, and rarely paraphrase (Alva-Manchego et al., 2017).",
"Moreover, WIKILARGE is the concatenation of three early datasets (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011) that are extracted from Wikipedia dumps and are known to contain many errors (Xu et al., 2015).",
"To address these problems, we create the first high-quality manually annotated sentence-aligned datasets: NEWSELA-MANUAL with 50 article sets, and WIKI-MANUAL with 500 article pairs.",
"We design a novel neural CRF alignment model, which utilizes fine-tuned BERT to measure semantic similarity and leverages the similar order of content be-2 Hwang et al. (2015) annotated 46 article pairs from Simple-Normal Wikipedia corpus; however, its annotation is noisy, and it contains many sentence splitting errors.",
"tween parallel documents, combined with an effective paragraph alignment algorithm.",
"Experiments show that our proposed method outperforms all the previous monolingual sentence alignment approaches (Stajner et al., 2018; Paetzold et al., 2017; Xu et al., 2015) by more than 5 points in F1.",
"By applying our alignment model to all the 1,882 article sets in Newsela and 138,095 article pairs in Wikipedia dump, we then construct two new simplification datasets, NEWSELA-AUTO (666,645 sentence pairs) and WIKI-AUTO (488,332 sentence pairs).",
"Our new datasets with improved quantity and quality facilitate the training of complex seq2seq models.",
"A BERT-initialized Transformer model trained on our datasets outperforms the state-of-the-art by 3.4% in terms of SARI, the main automatic metric for text simplification.",
"Our simplification model produces 25% more rephrasing than those trained on the existing datasets.",
"Our contributions include: 1. Two manually annotated datasets that enable the first systematic study for training and evaluating monolingual sentence alignment; 2. A neural CRF sentence alinger and a paragraph alignment algorithm that employ fine-tuned BERT to capture semantic similarity and take advantage of the sequential nature of parallel documents; 3. Two automatically constructed text simplification datasets which are of higher quality and 4.7 and 1.6 times larger than the existing datasets in their respective domains; 4. A BERT-initialized Transformer model for automatic text simplification, trained on our datasets, which establishes a new state-of-the-art in both automatic and human evaluation.",
"We propose a neural CRF sentence alignment model, which leverages the similar order of content presented in parallel documents and captures editing operations across multiple sentences, such as splitting and elaboration (see Figure 1 for an example).",
"To further improve the accuracy, we first align paragraphs based on semantic similarity and vicinity information, and then extract sentence pairs from these aligned paragraphs.",
"In this section, we describe the task setup and our approach.",
"Given a simple article (or paragraph) S of m sentences and a complex article (or paragraph) C of n sentences, for each sentence s i ( i [1 , m ] ) in the simple article, we aim to find its corresponding sentence c a i ( a i [0 , n ] ) in the complex article.",
"We use a i to denote the index of the aligned sentence, where a i = 0 indicates that sentence s i is not aligned to any sentence in the complex article.",
"The full alignment a between article (or paragraph) pair S and C can then be represented by a sequence of alignment labels a = ( a 1 , a 2 , . . . , a m ) .",
"Figure 1 shows an example of alignment labels.",
"One spe-cific aspect of our CRF model is that it uses a varied number of labels for each article (or paragraph) pair rather than a fixed set of labels.",
"We learn P ( a | S, C ) , the conditional probability of alignment a given an article pair ( S, C ) , using",
"P ( a | S, C ) = exp(( a , S, C )) (cid:80) a A exp(( a , S, C )) = exp( (cid:80) | S | i =1 ( a i , a i 1 , S, C )) (cid:80) a A exp( (cid:80) | S | i =1 ( a i , a i 1 , S, C (1)",
"where | S | = m denotes the number of sentences in article S .",
"The score (cid:80) | S | i =1 ( a i , a i 1 , S, C ) sums over the sequence of alignment labels a = ( a 1 , a 2 , . . . , a m ) between the simple article S and the complex article C , and could be decomposed into two factors as follows: ( a i , a i 1 , S, C ) = sim ( s i , c a i ) + T ( a i , a i 1 ) (2) where sim ( s i , c a i ) is the semantic similarity score between the two sentences, and T ( a i , a i 1 ) is a pairwise score for alignment label transition that a i follows a i 1 .",
"Semantic Similarity A fundamental problem in sentence alignment is to measure the semantic similarity between two sentences s i and c j .",
"Prior work used lexical similarity measures, such as Jaccard similarity (Xu et al., 2015), TF-IDF (Paetzold et al., 2017), and continuous n-gram features (Stajner et al., 2018).",
"In this paper, we fine-tune BERT (De-vlin et al., 2019) on our manually labeled dataset (details in 3) to capture semantic similarity.",
"Alignment Label Transition In parallel documents, the contents of the articles are often presented in a similar order.",
"The complex sentence c a i that is aligned to s i , is often related to the complex sentences c a i 1 and c a i +1 , which are aligned to s i 1 and s i +1 , respectively.",
"To incorporate this intuition, we propose a scoring function to model the transition between alignment labels using the following features: g 1 = | a i a i 1 | g 2 = 1 ( a i = 0 , a i 1 (cid:54) = 0) g 3 = 1 ( a i (cid:54) = 0 , a i 1 = 0) g 4 = 1 ( a i = 0 , a i 1 = 0) (3) where g 1 is the absolute distance between a i and a i 1 , g 2 and g 3 denote if the current or prior sentence is not aligned to any sentence, and g 4 indicates whether both s i and s i 1 are not aligned to any sentences.",
"where [ , ] represents concatenation operation and FFNN is a 2-layer feedforward neural network.",
"We provide more implementation details of the model in Appendix A.1.",
"During inference, we find the optimal alignment a :",
"using Viterbi algorithm in O ( mn 2 ) time.",
"During training, we maximize the conditional probability of the gold alignment label a : log P ( a | S, C ) =( a , S, C ) log (cid:88) a A exp(( a , S, C )) (6) The second term sums the scores of all possible alignments and can be computed using forward algorithm in O ( mn 2 ) time as well.",
"Both accuracy and computing efficiency can be improved if we align paragraphs before aligning sentences.",
"In fact, our empirical analysis revealed that sentence-level alignments mostly reside within the corresponding aligned paragraphs (details in 4.4 and Table 3).",
"Moreover, aligning paragraphs first provides more training instances and reduces the label space for our neural CRF model.",
"We propose Algorithm 1 and 2 for paragraph alignment.",
"Given a simple article S with k paragraphs S = ( S 1 , S 2 , . . . , S k ) and a complex article C with l paragraphs C = ( C 1 , C 2 , . . . , C l ) , we first apply Algorithm 1 to calculate the semantic similarity matrix simP between paragraphs by averaging or maximizing over the sentence-level similarities ( 2.2).",
"Then, we use Algorithm 2 to generate the paragraph alignment matrix alignP .",
"We align paragraph pairs if they satisfy one of the two conditions:",
"(a) having high semantic similarity and appearing in similar positions in the article pair (e.g., both at the beginning), or",
"(b) two continuous paragraphs in the complex article having relatively high semantic similarity with one paragraph in the simple side, (e.g., paragraph splitting or fusion).",
"The difference of relative position in documents Algorithm 1: Pairwise Paragraph Similarity Initialize: simP R 2 k l to 0 2 k l for i 1 to k do for j 1 to l do simP [1 ,i,j ] = avg s p S i (cid:16) max c q C j simSent ( s p ,c q ) (cid:17) simP [2 ,i,j ] = max s p S i ,c q C j simSent ( s p ,c q ) end endreturn simP Algorithm 2: Paragraph Alignment Algorithm Input: simP R 2 k l Initialize: alignP I k l to 0 k l for i 1 to k do j max = argmax j simP [1 , i, j ] if simP [1 , i, j max ] > 1 and d ( i, j max ) < 2 then alignP [ i, j max ] = 1 endfor j 1 to l do if simP [2 , i, j ] > 3 then alignP [ i, j ] = 1 endif j > 1 & simP [2 , i, j ] > 4 & simP [2 , i, j 1] > 4 & d ( i, j ) < 5 & d ( i, j 1) < 5 then alignP [ i, j ] = 1 alignP [ i, j 1] = 1 end end endreturn alignP is defined as d ( i, j ) = | ik jl | , and the thresholds 1 5 in Algorithm 2 are selected using the dev set.",
"Finally, we merge the neighbouring paragraphs which are aligned to the same paragraph in the simple article before feeding them into our neural CRF aligner.",
"We provide more details in Appendix A.1.",
"To address the lack of reliable sentence alignment for Newsela (Xu et al., 2015) and Wikipedia (Zhu et al., 2010; Woodsend and Lapata, 2011), we designed an efficient annotation methodology to first manually align sentences between a few complex and simple article pairs.",
"Then, we automatically aligned the rest using our alignment model trained on the human annotated data.",
"We created two sentence-aligned parallel corpora (details in 5), which are the largest to date for text simplification.",
"Newsela corpus (Xu et al., 2015) consists of 1,932 English news articles where each article (level 0) is",
"re-written by professional editors into four simpler versions at different readability levels (level 1-4).",
"We annotate sentence alignments for article pairs at adjacent readability levels (e.g., 0-1, 1-2) as the alignments between non-adjacent levels (e.g., 0-2) can be then derived automatically.",
"To ensure efficiency and quality, we designed the following three-step annotation procedure: 1. Align paragraphs using CATS toolkit (Stajner et al., 2018), and then correct the automatic paragraph alignment errors by two in-house annotators.",
"3 Performing paragraph alignment as the first step significantly reduces the number of sentence pairs to be annotated from every possible sentence pair to the ones within the aligned paragraphs.",
"We design an efficient visualization toolkit for this step, for which a screenshot can be found in Appendix E.2.",
"2. For each sentence pair within the aligned paragraphs, we ask five annotators on the Figure 3 We consider any sentence pair not in the aligned paragraph pairs as not-aligned .",
"This assumption leads to a small number of missing sentence alignments, which are manually corrected in Step 3. Figure 2: Manual inspection of 100 random sentence pairs from our corpora (NEWSELA-AUTO and WIKIAUTO ) and the existing Newsela (Xu et al., 2015) and Wikipedia (Zhang and Lapata, 2017) corpora.",
"Eight 4 crowdsourcing platform to classify into one of the three categories: aligned , partially-aligned , or not-aligned .",
"We provide the annotation instructions and interface in Appendix E.1.",
"We require annotators to spend at least ten seconds per question and embed one test question in every five questions.",
"Any worker whose accuracy drops below 85% on test questions is removed.",
"The inter-annotator agreement is 0.807 measured by Cohen's kappa (Artstein and Poesio, 2008).",
"3. We have four in-house annotators (not authors) verify the crowdsourced labels.",
"We manually aligned 50 article groups to create the NEWSELA-MANUAL dataset with a 35/5/10 split for train/dev/test, respectively.",
"We trained our aligner on this dataset (details in 4), then automatically aligned sentences in the remaining 1,882 article groups in Newsela (Table 1) to create a new sentence-aligned dataset, NEWSELA-AUTO , which consists of 666k sentence pairs predicted as aligned and partially-aligned .",
"NEWSELA-AUTO is considerably larger than the previous NEWSELA (Xu et al., 2015) dataset of 141,582 pairs, and contains 44% more interesting rewrites (i.e., rephrasing and splitting cases) as shown in Figure 2. 4",
"We also create a new version of Wikipedia corpus by aligning sentences between English Wikipedia and Simple English Wikipedia.",
"Previous work (Xu et al., 2015) has shown that Wikipedia is much noisier than the Newsela corpus.",
"We provide this dataset in addition to facilitate future research.",
"We first extract article pairs from English and Simple English Wikipedia by leveraging Wikidata, a well-maintained database that indexes named entities (and events etc.) and their Wikipedia pages in different languages.",
"We found this method to be more reliable than using page titles (Coster and Kauchak, 2011) or cross-lingual links (Zhu et al., 2010; Woodsend and Lapata, 2011), as titles can be ambiguous and cross-lingual links may direct to a disambiguation or mismatched page (more details in Appendix B).",
"In total, we extracted 138,095 article pairs from the 2019/09 Wikipedia dump, which is two times larger than the previous datasets (Coster and Kauchak, 2011; Zhu et al., 2010) of only 60 65k article pairs, using an improved version of the WikiExtractor library.",
"5 Then, we crowdsourced the sentence alignment annotations for 500 randomly sampled document pairs (10,123 sentence pairs total).",
"As document length in English and Simple English Wikipedia articles vary greatly, 6 we designed the following annotation strategy that is slightly different from Newsela.",
"For each sentence in the simple article, we select the sentences with the highest similarity scores from the complex article for manual annotation, based on four similarity measures: lexical similarity from CATS (Stajner et al., 2018), cosine similarity using TF-IDF (Paetzold et al., 2017), cosine similarity between BERT sentence embeddings, and alignment probability by a BERT model fine-tuned on our NEWSELA-MANUAL data ( 3.1).",
"As these four metrics may rank the same sentence at the top, on an average, we collected 2.13 complex sentences for every simple sentence and annotated the alignment label for each sentence pair.",
"Our pilot study showed that this method captured 93.6% of the aligned sentence pairs.",
"We named this manually labeled dataset WIKI-MANUAL with a train/dev/test split of 350/50/100 article pairs.",
"Finally, we trained our alignment model on this 5 https://github.com/attardi/wikiextractor 6 The average number of sentences in an article is 9.2 16.5 for Simple English Wikipedia and 74.8 94.4 for English Wikipedia.",
"annotated dataset to automatically align sentences for all the 138,095 document pairs (details in Appendix B).",
"In total, we yielded 604k non-identical aligned and partially-aligned sentence pairs to create the WIKI-AUTO dataset.",
"Figure 2 illustrates that WIKI-AUTO contains 75% less defective sentence pairs than the old WIKILARGE (Zhang and Lapata, 2017) dataset.",
"In this section, we present experiments that compare our neural sentence alignment against the state-of-the-art approaches on NEWSELA-MANUAL ( 3.1) and WIKI-MANUAL ( 3.2) datasets.",
"We compare our neural CRF aligner with the following baselines and state-of-the-art approaches:",
"1. Three similarity-based methods: Jaccard similarity (Xu et al., 2015), TF-IDF cosine similarity (Paetzold et al., 2017) and a logistic regression classifier trained on our data with lexical features from Stajner et al. (2018).",
"2. JaccardAlign (Xu et al., 2015), which uses Jaccard coefficient for sentence similarity and a greedy approach for alignment.",
"3. MASSAlign (Paetzold et al., 2017), which combines TF-IDF cosine similarity with a vicinity-driven dynamic programming algorithm for alignment.",
"4. CATS toolkit (Stajner et al., 2018), which uses character n-gram features for sentence similarity and a greedy alignment algorithm.",
"We report Precision , Recall and F1 on two binary classification tasks: aligned + partially-aligned vs. not-aligned ( Task 1 ) and aligned vs. partially-aligned + not-aligned ( Task 2 ).",
"It should be noted that we excluded identical sentence pairs in the evaluation as they are trivial to classify.",
"Table 2 shows the results on NEWSELA-MANUAL test set.",
"For similarity-based methods, we choose a threshold based on the maximum F1 on the dev set.",
"Our neural CRF aligner outperforms the state-of-the-art approaches by more than 5 points in F1.",
"In particular, our method performs better than the previous work on partial alignments, which contain many interesting simplification operations, such as sentence splitting and paraphrasing with deletion.",
"Similarly, our CRF alignment model achieves 85.1 F1 for Task 1 ( aligned + partially-aligned vs. not-aligned ) on the WIKI-MANUAL test set.",
"It outperforms one of the previous SOTA approaches CATS (Stajner et al., 2018) by 15.1 points in F1.",
"We provide more details in Appendix C. 4.4 Ablation Study We analyze the design choices crucial for the good performance of our alignment model, namely CRF component, the paragraph alignment and the BERT-based semantic similarity measure.",
"Table 3 shows the importance of each component with a series of ablation experiments on the dev set.",
"CRF Model Our aligner achieves 93.2 F1 and 88.1 F1 on Task 1 and 2, respectively, which is around 3 points higher than its variant without the CRF component (BERT finetune + ParaAlign).",
"Modeling alignment label transitions and sequential predictions helps our neural CRF aligner to handle sentence splitting cases better, especially when sentences undergo dramatic rewriting.",
"Paragraph Alignment Adding paragraph alignment (BERT finetune + ParaAlign) improves the precision on Task 1 from 93.3 to 98.4 with a negligible decrease in recall when compared to not aligning paragraphs (BERT finetune ).",
"Moreover, paragraph alignments generated by our algorithm (Our Aligner) perform close to the gold alignments (Our Aligner + gold ParaAlign) with only 0.9 and 0.3 difference in F1 on Task 1 and 2, respectively.",
"Semantic Similarity BERT finetune performs better than other neural models, including Infersent (Conneau et al., 2017), ESIM (Chen et al., 2017), BERTScore (Zhang et al., 2020) and pre-trained BERT embedding (Devlin et al., 2019).",
"For BERTScore, we use idf weighting, and treat simple sentence as reference.",
"In this section, we compare different automatic text simplification models trained on our new parallel corpora, NEWSELA-AUTO and WIKI-AUTO , with their counterparts trained on the existing datasets.",
"We establish a new state-of-the-art for sentence simplification by training a Transformer model with initialization from pre-trained BERT checkpoints.",
"Existing datasets of complex-simple sentences, NEWSELA (Xu et al., 2015) and WIKILARGE (Zhang and Lapata, 2017), were aligned using lexical similarity metrics.",
"NEWSELA dataset (Xu et al., 2015) was aligned using JaccardAlign ( 4.1).",
"WIKILARGE is a concatenation of three early datasets (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011) where sentences in Sim-ple/Normal English Wikipedia and editing history were aligned by TF-IDF cosine similarity.",
"For our new NEWSELA-AUTO , we partitioned the article sets such that there is no overlap between the new train set and the old test set, and vice-versa.",
"Following Zhang and Lapata (2017), we also excluded sentence pairs corresponding to the levels 01, 12 and 23.",
"For our WIKIAUTO dataset, we eliminated sentence pairs with high ( > 0.9) or low ( < 0.1) lexical overlap based on BLEU scores (Papineni et al., 2002), following Stajner et al. (2015).",
"We observed that sentence pairs with low BLEU are often inaccurate paraphrases with only shared named entities and the pairs with high BLEU are dominated by sentences merely copied without simplification.",
"We used the benchmark TURK corpus (Xu et al., 2016) for evaluation on Wikipedia, which consists of 8 human-written references for sentences in the validation and test sets.",
"We discarded sentences in TURK corpus from WIKI-AUTO .",
"Table 4 shows the statistics of the existing and our new datasets.",
"We compare the following seq2seq models trained using our new datasets versus the existing datasets:",
"1. A BERT-initialized Transformer , where the encoder and decoder follow the BERT base architecture.",
"The encoder is initialized with the same checkpoint and the decoder is randomly initialized (Rothe et al., 2020).",
"2. A randomly initialized Transformer with the same BERT base architecture as above.",
"3. A BiLSTM-based encoder-decoder model used in Zhang and Lapata (2017).",
"4. EditNTS (Dong et al., 2019), 7 a state-of-the-art neural programmer-interpreter (Reed and de Freitas, 2016) approach that predicts explicit edit operations sequentially.",
"In addition, we compared our BERT-initialized Transformer model with the released system outputs from Kriz et al. (2019) and EditNTS (Dong et al., 2019).",
"We implemented our LSTM and Transformer models using Fairseq.",
"8 We provide the model and training details in Appendix D.1.",
"7 https://github.com/yuedongP/EditNTS 8 https://github.com/pytorch/fairseq Evaluation on our new test set Evaluation on old test set SARI add keep del FK Len SARI add keep del FK Len Complex (input) 11.9 0.0 35.5 0.0 12 24.3 12.5 0.0 37.7 0.0 11 22.9 Models trained on old dataset (original NEWSELA corpus released in (Xu et al., 2015)) Transformer rand 33.1 1.8 22.1 75.4 6.8 14.2 34.1 2.0 25.5 74.8 6.7 14.2 LSTM 35.6 2.8 32.1 72.0 8.2 16.9 36.2 2.5 34.9 71.3 7.7 16.3 EditNTS 35.5 1.8 30.0 75.4 7.1 14.1 36.1 1.7 32.8 73.8 7.0 14.1 Transformer bert 34.4 2.4 25.2 75.8 7.0 14.5 35.1 2.7 27.8 74.8 6.8 14.3 Models trained on our new dataset (NEWSELA-AUTO ) Transformer rand 35.6 3.2 28.4 75.0 7.1 14.4 35.2 2.5 29.7 73.5 7.0 14.2 LSTM 35.8 3.9 30.5 73.1 7.0 14.3 36.4 3.3 33.0 72.9 6.6 14.0 EditNTS 35.8 2.4 29.4 75.6 6.3 11.6 35.7 1.8 31.1 74.2 6.1 11.5 Transformer bert 36.6 4.5 31.0 74.3 6.8 13.3 36.8 3.8 33.1 73.4 6.8 13.5 Simple (reference) 6.6 13.2 6.2 12.6 Table 5: Automatic evaluation results on NEWSELA test sets comparing models trained on our dataset NEWSELAAUTO against the existing dataset (Xu et al., 2015).",
"In this section, we evaluate different simplification models trained on our new datasets versus on the old existing datasets using both automatic and human evaluation.",
"We report SARI (Xu et al., 2016), Flesch-Kincaid ( FK ) grade level readability (Kincaid and Chissom, 1975), and average sentence length ( Len ).",
"While SARI compares the generated sentence to a set of reference sentences in terms of correctly inserted, kept and deleted n-grams ( n { 1 , 2 , 3 , 4 } ) , FK measures the readability of the generated sentence.",
"We also report the three rewrite operation scores used in SARI: the precision of delete ( del ), the F1-scores of add ( add ), and keep ( keep ) operations.",
"Wikipedia datasets respectively.",
"Systems trained on our datasets outperform their equivalents trained on the existing datasets according to SARI.",
"The difference is notable for Transformer bert with a 6.4% and 3.7% increase in SARI on NEWSELA-AUTO test set and TURK corpus, respectively.",
"Larger size and improved quality of our datasets enable the training of complex Transformer models.",
"In fact, Transformer bert trained on our new datasets outperforms the existing state-of-the-art systems for automatic text simplification.",
"Although improvement in SARI is modest for LSTM-based models (LSTM and EditNTS), the increase in F1 scores for addition and deletion operations indicate that the models trained on our datasets make more meaningful changes to the input sentence.",
"We also performed human evaluation by asking five Amazon Mechanical Turk workers to rate fluency, adequacy and simplicity (detailed instructions in Appendix D.2) of 100 random sentences generated by different simplification models trained on NEWSELA-AUTO and the existing dataset.",
"Each SARI add keep del FK Len Complex (input) 25.9 0.0 77.8 0.0 13.6 22.4 Models trained on old dataset (WIKILARGE ) LSTM 33.8 2.5 65.6 33.4 11.6 20.6 Transformer rand 33.5 3.2 64.1 33.2 11.1 17.7 EditNTS 35.3 3.0 63.9 38.9 11.1 18.5 Transformer bert 35.3 4.4 66.0 35.6 10.9 17.9 Models trained on our new dataset (WIKI-AUTO ) LSTM 34.0 2.8 64.0 35.2 11.0 19.3 Transformer rand 34.7 3.3 68.8 31.9 11.7 18.7 EditNTS 36.4 3.6 66.1 39.5 11.6 20.2 Transformer bert 36.6 5.0 67.6 37.2 11.4 18.7 Simple (reference) 11.7 20.2 Table 8: Automatic evaluation results on Wikipedia TURK corpus comparing models trained on WIKIAUTO and WIKILARGE (Zhang and Lapata, 2017).",
"worker evaluated these aspects on a 5-point Likert scale.",
"We averaged the ratings from five workers.",
"Table 7 demonstrates that Transformer bert trained on NEWSELA-AUTO greatly outperforms the one trained on the old dataset.",
"Even with shorter sentence outputs, our Transformer bert retained similar adequacy as the LSTM-based models.",
"Our Transformer bert model also achieves better fluency, adequacy, and overall ratings compared to the SOTA systems (Table 6).",
"We provide examples of system outputs in Appendix D.3.",
"Our manual inspection (Figure 3) also shows that Transfomer bert trained on NEWSELA-AUTO performs 25% more paraphrasing and deletions than its variant trained on the previous NEWSELA (Xu et al., 2015) dataset.",
"Text simplification is considered as a text-to-text generation task where the system learns how to simplify from complex-simple sentence pairs.",
"There is a long line of research using methods based on hand-crafted rules (Siddharthan, 2006; Niklaus et al., 2019), statistical machine translation (Narayan and Gardent, 2014; Xu et al., 2016; Wubben et al., 2012), or neural seq2seq models (Zhang and Lapata, 2017; Zhao et al., 2018; Nisioi et al., 2017).",
"As the existing datasets were built using lexical similarity metrics, they frequently omit paraphrases and sentence splits.",
"While training on such datasets creates conservative systems that rarely paraphrase, evaluation on these datasets exhibits an unfair preference for deletion-based simplification over paraphrasing.",
"Sentence alignment has been widely used to extract complex-simple sentence pairs from parallel articles for training text simplification systems.",
"Previous work used surface-level similarity metrics, such as TF-IDF cosine similarity (Zhu et al., 2010; Woodsend and Lapata, 2011; Coster and Kauchak, 2011; Paetzold et al., 2017), Jaccard-similarity (Xu et al., 2015), and other lexical features (Hwang et al., 2015; Stajner et al., 2018).",
"Then, a greedy (Stajner et al., 2018) or dynamic programming (Barzilay and Elhadad, 2003; Paetzold et al., 2017) algorithm was used to search for the optimal alignment.",
"Another related line of research (Smith et al., 2010; Tufis , et al., 2013; Tsai and Roth, 2016; Gottschalk and Demidova, 2017; Aghaebrahimian, 2018; Thompson and Koehn, 2019) aligns parallel sentences in bilingual corpora for machine translation.",
"In this paper, we proposed a novel neural CRF model for sentence alignment, which substantially outperformed the existing approaches.",
"We created two high-quality manually annotated datasets (NEWSELA-MANUAL and WIKI-MANUAL ) for training and evaluation.",
"Using the neural CRF sentence aligner, we constructed two largest sentence-aligned datasets to date (NEWSELA-AUTO and WIKI-AUTO ) for text simplification.",
"We showed that a BERT-initalized Transformer trained on our new datasets establishes new state-of-the-art performance for automatic sentence simplification.",
"We thank three anonymous reviewers for their helpful comments, Newsela for sharing the data, Ohio Supercomputer Center (Center, 2012) and NVIDIA for providing GPU computing resources.",
"We also thank Sarah Flanagan, Bohan Zhang, Raleigh Potluri, and Alex Wing for help with data annotation.",
"This research is supported in part by the NSF awards IIS-1755898 and IIS-1822754, ODNI and IARPA via the BETTER program contract 19051600004, ARO and DARPA via the Social-Sim program contract W911NF-17-C-0095, Figure Eight AI for Everyone Award, and Criteo Faculty Research Award to Wei Xu.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, ARO, DARPA or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein."
] | [
"abstain",
"result",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"objective",
"other",
"other",
"other",
"other",
"other"
] |
[
"The majority of work in targeted sentiment analysis has concentrated on finding better methods to improve the overall results.",
"Within this paper we show that these models are not robust to linguistic phenomena, specifically negation and speculation.",
"In this paper, we propose a multi-task learning method to incorporate information from syntactic and semantic auxiliary tasks, including negation and speculation scope detection, to create English-language models that are more robust to these phenomena.",
"Further we create two challenge datasets to evaluate model performance on negated and speculative samples.",
"We find that multi-task models and transfer learning via language modelling can improve performance on these challenge datasets, but the overall performances indicate that there is still much room for improvement.",
"We release both the datasets and the source code at https://github.com/ jerbarnes/multitask_negation_for_targeted_sentiment .",
"Targeted sentiment analysis (TSA) involves jointly predicting entities which are the targets of an opinion, as well as the polarity expressed towards them (Mitchell et al., 2013).",
"The TSA task, which is part of the larger set of fine-grained sentiment analysis tasks, can enable companies to provide better recommendations (Bauman et al., 2017), as well as give digital humanities scholars a quantitative approach to identifying how sentiment and emotions develop in literature (Alm et al., 2005; Kim and Klinger, 2019).",
"Modelling TSA has moved from sequence labeling using conditional random fields (CRFs) (Mitchell et al., 2013) or Recurrent Neural Networks (RNN) (Zhang et al., 2015a; Katiyar and Cardie, 2016; Ma et al., 2018), to Transformer The authors contributed equally.",
"models (Hu et al., 2019).",
"However, all these improvements have concentrated on making the best of the relatively small task-specific datasets.",
"As annotation for fine-grained sentiment is difficult and often has low inter-annotator agreement (Wiebe et al., 2005; vrelid et al., 2020), this data tends to be small and of varying quality.",
"This lack of high-quality training data prevents TSA models from learning complex, compositional linguistic phenomena.",
"For sentence-level sentiment classification, incorporating compositional information from relatively small amounts of negation or speculation data improves both robustness and general performance (Councill et al., 2010; Cruz et al., 2016; Barnes et al., 2020).",
"Furthermore, transfer learning via language-modelling also improves fine-grained sentiment analysis (Hu et al., 2019; Li et al., 2019b).",
"In this paper, we wish to explore two research questions : 1. Does multi-task learning of negation and speculation lead to more robust targeted sentiment models?",
"2. Does transfer learning based on language-modelling already incorporate this information in a way that is useful for targeted sentiment models?",
"We explore a multi-task learning ( MTL ) approach to incorporate auxiliary task information in targeted sentiment classifiers in English in order to investigate the effects of negation and speculation in detail, we also annotate two new challenge datasets which contain negated and speculative examples.",
"We find that the performance is negatively affected by negation and speculation, but MTL and transfer learning ( TL ) models are more robust than single task learning ( STL ).",
"TL reduces the improvements of MTL, suggesting that TL is similarly effective at learning negation and speculation.",
"The overall performance on the challenge datasets, however, confirms that there is still room for improvement.",
"The contributions of the paper are the following:",
"i) we introduce two English challenge datasets annotated for negation and speculation,",
"ii) we propose a multi-task model to incorporate negation and speculation information and evaluate it across four English datasets,",
"iii) Finally, using the challenge datasets, we show the quantitative effect of negation and speculation on TSA.",
"Fine-grained sentiment analysis is a complex task which can be broken into four subtasks (Liu, 2015):",
"i) opinion holder extraction,",
"ii) opinion target extraction,",
"iii) opinion expression extraction,",
"iv) and resolving the polarity relationship between the holder, target, and expression.",
"From these four subtasks, targeted sentiment analysis (TSA) (Jin and Ho, 2009; Chen et al., 2012; Mitchell et al., 2013) reduces the fine-grained task to only the second and final subtasks, namely extracting the opinion target and the polarity towards it.",
"English TSA datasets include MPQA (Wiebe et al., 2005), the SemEval Laptop and Restaurant reviews (Pontiki et al., 2014, 2016), and Twitter datasets (Mitchell et al., 2013; Wang et al., 2017).",
"Further annotation projects have led to review datasets for Arabic, Dutch, French, Russian, and Spanish (Pontiki et al., 2016) and Twitter datasets for Spanish (Mitchell et al., 2013) and Turkish (Pontiki et al., 2016).",
"Prior work has also explored the effects of different phenomena on TSA through error analysis and challenge datasets.",
"Wang et al. (2017), Xue and Li (2018), and Jiang et al. (2019) showed the difficulties of polarity classification of targets on texts with multiple different polarities through the distinct sentiment error splits, the hard split, and the MAMS challenge dataset respectively.",
"Both Kaushik et al. (2020) and Gardner et al. (2020) augment document sentiment datasets by asking annotators to create counterfactual examples for the IMDB dataset.",
"More recently, Ribeiro et al. (2020) showed how sentence-level sentiment models are affected by various linguistic phenomena including negation, semantic role labelling, temporal changes, and name entity recognition.",
"Previous approaches to modelling TSA have often relied on general sequence labelling models, e.",
"g.",
"CRFs (Mitchell et al., 2013), probabilistic graphical models (Klinger and Cimiano, 2013), RNNs (Zhang et al., 2015b; Ma et al., 2018), and more recently pretrained Transformer models (Li et al., 2019b).",
"Multi-task and transfer learning The main idea of MTL (Caruana, 1993) is that a model which receives signal from two or more correlated tasks will more quickly develop a useful inductive bias, allowing it to generalize better.",
"This approach has gained traction in NLP, where several benchmark datasets have been created (Wang et al., 2019b,a).",
"Under some circumstances, MTL can also be seen as a kind of data augmentation, where a model takes advantage of extra training data available in an auxiliary task to improve the main task (Kshirsagar et al., 2015; Plank, 2016).",
"Much of MTL uses hard parameter sharing (Caruana, 1993), which shares all parameters across some layers of a neural network.",
"When the main task and auxiliary task are closely related, this approach can be an effective way to improve model performance (Collobert et al., 2011; Peng and Dredze, 2017; Martnez Alonso and Plank, 2017; Augen-stein et al., 2018), although it is often preferable to make predictions for low-level auxiliary tasks at lower layers of a multi-layer MTL setup (S-gaard and Goldberg, 2016), which we refer to as hierarchical MTL .",
"Transfer learning methods (Mikolov et al., 2013; Peters et al., 2018a; Devlin et al., 2019) can leverage unlabeled data, but require training large models on large amounts of data.",
"However, it seems even these models can be sensitive to negation (Ettinger, 2020; Ribeiro et al., 2020; Kassner and Schtze, 2020) Specific to TSA, previous research has used MTL to incorporate document-level sentiment (He et al., 2019), or to jointly learn to extract opinion expressions (Li et al., 2019b; Chen and Qian, 2020).",
"Negation and Speculation Detection As negation is such a common linguistic phenomenon and one that has a direct impact on sentiment, previous work has shown that incorporating negation information is crucial for accurate sentiment prediction.",
"Feature-based approaches did this by including features from negation detection modules (Das and Chen, 2007; Councill et al., 2010; Lapponi et al., 2012), while it has now become more common to assume that neural models learn negation features in an end-to-end fashion (Socher et al., 2013).",
"However, recent research suggests that end-to-end models are not able to robustly interpret the effect of negation on sentiment (Barnes et al., 2019), and that explicitly learning negation can improve sentiment results (Barnes, 2019; Barnes et al., 2020).",
"On the other hand, speculation refers to whether a statement is described as a fact, a possibility, or a counterfact (Saur and Pustejovsky, 2009).",
"Although there are fewer speculation annotated corpora available (Vincze et al., 2008; Kim et al., 2013; Konstantinova et al., 2012), including speculation information has shown promise for improving sentiment analysis at document-level (Cruz et al., 2016).",
"There has, however, been little research on how these phenomena specifically affect fine-grained approaches to sentiment analysis.",
"This is important because, compared to documentor sentence-level tasks where there is often a certain redundancy in sentiment signal, for fine-grained tasks negation and speculation often completely change the sentiment (see Table 2), making their identification and integration within a fine-grained sentiment models essential to resolve.",
"We perform the main experiments on four English language datasets: The Laptop dataset from SemEval 2014 (Pontiki et al., 2014), the Restaurant dataset which combines the SemEval 2014 (Pon-tiki et al., 2014), 2015 (Pontiki et al., 2015), and 2016 (Pontiki et al., 2016), the Multi-aspect Multi-sentiment ( MAMS ) dataset (Jiang et al., 2019), and finally the Multi-perspective Question Answering ( MPQA ) dataset (Wiebe et al., 2005) 1 shows the distribution of the sentiment classes .",
"We take the pre-processed Laptop and Restaurant datasets from Li et al. (2019a), and use the train, dev, and test splits that they provide.",
"We use the NLTK word tokenizer to tokenise the Laptop, Restaurant, and MPQA datasets and Spacy for the MAMS dataset.",
"We choose datasets that differ largely in their domain, size, and annotation style in order to determine if any trends we see are robust to these data characteristics or whether they are instead correlated.",
"We convert all datasets to a targeted setup by extracting only the aspect targets and their polarity.",
"We use the unified tagging scheme 2 following recent work (Li et al., 2019a,b) and convert all data 1 All datasets contain the following three sentiment classes positive, neutral, and negative.",
"to BIOUL format 3 with unified sentiment tags, e.",
"g.",
"B-POS for a beginning tag with a positive sentiment, so that we can cast the TSA problem as a sequence labeling task.",
"The statistics for these datasets are shown in Table 1. MAMS has the largest number of training targets (11,162), followed by Restaurant (3,896), Laptop (2,044) and finally MPQA has the fewest (1,264).",
"MPQA, however, has the longest average targets (6.3 tokens) compared to 1.3-1.5 for the other datasets.",
"This derives from the fact that entire phrases are often targets in MPQA.",
"Finally, due to the annotation criteria, the MAMS data also has the highest number of sentences with multiple aspects with multiple polarities nearly 100% in train, compared to less than 10% for Restaurant.",
"Although negation and speculation are prevalent in the original data negation and speculation occur in 13-25% and 9-20% of the sentences, respectively it is difficult to pry apart improvement on the original data with improvement on these two phenomena.",
"Therefore, we further annotate the dev and test set for the Laptop and Restaurant datasets 4 , and when possible 5 , insert negation and speculation cues into sentences lacking them, which we call Laptop Neg , Laptop Spec , Restaurant Neg , and Restaurant Spec .",
"Inserting negation and speculation cues often leads to a change in polarity from the original annotation, as shown in the example in Table 2. We finally keep all sentences that contain a negation or speculation cue, including those that occur naturally in the data.",
"As this process could introduce errors regarding the polarity expressed towards the targets, we doubly annotate the polarity for 50 sentences from the original dev data, the negated dev data, and the speculation dev data and calculate Cohen's Kappa scores.",
"The statistics and inter-annotator agreement scores (IAA) are shown in Table 1 6 .",
"The new annotations have similarly high IAA scores (0.66-0.70) to the original data 3 BIOUL format tags each token as either B : beginning token, I : inside token, O : outside token, U : unit (single token), or L : last token.",
"4 For clarification this is the SemEval 2014 Laptop dataset and the 2014, 2015, and 2016 combined Restaurant dataset.",
"5 While inserting negation into new sentences is quite trivial, as one can always negate full clauses, e.g. It's good It's not true that it's good, adding speculation often requires rewording of the sentence.",
"We did not include sentences that speculation made unnatural.",
"6 Table 7 of Appendix A shows the distribution of the sentiment classes.",
"(0.67-0.71), confirming the quality of the annotations.",
"For the multi-task learning experiments, we use six auxiliary tasks: negation scope detection using the Conan Doyle ( NEGCD ) (Morante and Daelemans, 2012), both negation detection ( NEGSFU ) and speculation detection ( SPEC ) on the SFU NegSpec dataset (Konstantinova et al., 2012), and Universal Part-of-Speech tagging ( UPOS ), Dependency Relation prediction ( DR ) and prediction of full lexical analysis ( LEX ) on the Streusle dataset (Schnei-der and Smith, 2015).",
"We show the train, dev, test splits, as well as the number of labels, label entropy and label kurtosis (Martnez Alonso and Plank, 2017) in Table 3. An example sentence with auxiliary labels is shown in Appendix B. Although it may appear that the SFU dataset is an order of magnitude larger than the Conan Doyle dataset, in reality, most of the training sentences do not contain annotations, leaving similar sized data if these are filtered.",
"Similar to the sentiment data, we convert the auxiliary tasks to BIO format and treat them as sequence labelling tasks.",
"We experiment with a single task baseline (STL) and a hierarchical multi-task model with a skip-connection (MTL), both of which are shown in Figure 1. For the STL model, we first embed a sentence and then pass the embeddings to a Bidirectional LSTM (Bi-LSTM).",
"These features are then concatenated to the input embeddings and fed to the second Bi-LSTM layer, ending with the token-wise sentiment predictions from the CRF tagger.",
"For the MTL model, we additionally use the output of the first Bi-LSTM layer as features for the separate auxiliary task CRF tagger.",
"As seen from Figure 1, the STL model and the MTL main task model use the same the green layers.",
"The MTL additionally uses the pink layer for the auxiliary task, adding less than 3.4% trainable parameters 7 for all auxiliary tasks except LEX, which adds 221.4% due to the large label set (see Table 3).",
"Furthermore, at inference time the MTL model is as efficient as STL, given that it only uses the green layers when predicting the targeted sentiment, of which this is empirically shown in Table 20 of Appendix F. Figure 1: The overall architecture where the STL model contains all of the green layers and the MTL uses the additional pink auxiliary CRF tagger.",
"embeddings (Pennington et al., 2014), as well as TL from Transformer ELMo embeddings (Peters et al., 2018b) 8 .",
"The GloVe embeddings are publicly available and trained on English Wikipedia and Gigaword data.",
"For the MPQA dataset we use the Transformer ELMo from Peters et al. (2018b) 9 which was trained on the 1 billion word benchmark (Chelba et al., 2014).",
"For the MAMS and Restaurant datasets we tuned a Transformer ELMo on 27 million (M) sentences from the 2019 Yelp review dataset 10 , and for the Laptop dataset on 28M sentences 11 from the Amazon electronics reviews dataset (McAuley et al., 2015) 12 .",
"Training these models on large amounts of in-domain data gives superior performance to models trained on more generic data, e.",
"g.",
"BERT (Devlin et al., 2019).",
"For all experiments we freeze the embedding layer in order to make the results between GloVe and TL more comparable with respect to the number of trainable parameters.",
"For TL, we learn a summed weighting of all layers 13 , as this is more effective 8 This is a 6 layer transformer model with a bi-directional language model objective that contains 56 million parameters excluding the softmax.",
"In comparison BERT uses a masked language modelling objective and contains 110 and 340 million parameters for the base and large versions (Devlin et al., 2019).",
"than using the last layer (Peters et al., 2018a).",
"For more details on the number of parameters used for each model see Table 19 in Appendix F. Training: For the STL and the MTL models, we tune hyperparameters using AllenTune (Dodge et al., 2019) on the Laptop development dataset.",
"We then use the best hyperparameters on the Laptop dataset for all the STL and MTL experiments, in order to reduce hyperparameter search.",
"We follow the result checklist for hyperparameter searches from (Dodge et al., 2019) (details found in Tables 17 and 18 of Appendix E along with Figure 2 showing the expected validation scores from the hyperparameter tuning).",
"For the MTL model, a single epoch involves training for one epoch on the auxiliary task and then an epoch on the main task, as previous work has shown training the lower-level task first improves overall results (Hashimoto et al., 2017).",
"In this work, we assume all of the auxiliary training tasks are conceptually lower than TSA.",
"Evaluation: For all experiments, we run each model five times (Reimers and Gurevych, 2017) and report the mean and standard derivation.",
"We also take the distribution of the five runs to perform significance testing (Reimers and Gurevych, 2018), eliminating the need for Bonferroni correction.",
"Following Dror et al. (2018), we use the nonparametric Wilcoxon signed-rank test (Wilcoxon, 1945) for the F 1 metrics and a more powerful parametric Welch's t-test (Welch, 1947) for the accu-transformer layers and the output from the non-contextualised character encoder, thus in total 7 layers are weighted and summed.",
"We report the F 1 score for the target extraction ( F 1 -a ), macro F 1 ( F 1 -s ) and accuracy score ( acc-s ) for the sentiment classification for all targets that have been correctly identified by the model, and finally the F 1 score for the full targeted task ( F 1 -i ), following He et al. (2019).",
"Unlike He et al. (2019), we do not use any of the samples that contain the conflict label on Laptop or Restaurant.",
"The test results for the main F 1 -i metric are reported in Table 4, and the other metrics for the test split are reported in Tables 9 and 10 of Appendix C. The MTL models outperform STL on four of the eight experiments (see Table 4), although the STL TL model is significantly better than the majority of MTL models on MPQA.",
"Of the MTL models, NEGCD + GloVe performs best on MPQA (18.88), DR + GloVe is best on Restaurant (66.06), and LEX is the best model on Laptop (54.85) with GloVe and Restaurant (71.77) with TL.",
"The TL models consistently outperform the GloVe models by an average of 5.4 percentage points (pp) across all experiments and give the best performance on all datasets.",
"The results suggest that transfer learning reduces the beneficial effects of MTL.",
"At the same time, the results suggest that MTL does not hurt the STL models, as no STL model is significantly better than all of the MTL models across the datasets and embeddings for the F 1 -i metric.",
"In order to isolate the effects of negation and speculation on the results, we test all models trained on the original Laptop and Restaurant datasets on the Laptop Neg , Restaurant Neg , Laptop Spec , and Restaurant Spec test splits.",
"Tables 5 and 6 show the results for negation and speculation, respectively.",
"The results for the dev split and the F 1 -s of the test split are shown in Appendix D. Firstly, all models perform comparatively worse on the challenge datasets, dropping an average of 24 and 25 pp on F 1 -i on the negation and speculation data, respectively.",
"Nearly all of this drop comes from poorer classification ( acc-s , F 1 s ), while target extraction ( F 1 -a ) is relatively stable.",
"This demonstrates the importance of resolving negation and speculation for TSA and the usefulness of the annotated data to determine these effects.",
"On Laptop Neg and Restaurant Neg incorporating negation auxiliary tasks gives an average improvement of 3.8 pp on the F 1 -i metric when using GloVe embeddings.",
"More specifically, MTL with negation improves the sentiment classification scores, but does not help extraction.",
"This makes sense conceptually, as negation has little effect on whether or not a word is part of a sentiment target.",
"Instead, 14 These findings also generalise to the results on the development splits, shown in Tables 11 and 12 within Appendix C. NEGCDDR LEX NEGSFUSPEC UPOS STLL a p t op N e g sentiment GloVe 42.80 (2.48) 38.54 (cid:63) (0.98) 38.72 (cid:63) (3.00) 45.26 (1.45) 41.23 (cid:63) (2.90) 38.92 (cid:63) (1.74) 38.32 (cid:63) (1.73) TL 48.49 (2.32) 45.90 (3.54) 45.93 (2.13) 47.04 (2.93) 45.71 (2.19) 46.29 (2.03) 46.50 (3.30) extraction GloVe 75.36 (cid:63) (0.91) 76.05 (cid:63) (1.20) 78.68 (0.97) 75.04 (cid:63) (1.92) 76.14 (2.06) 77.98 (1.41) 76.52 (cid:63) (1.24) TL 82.39 (1.34) 82.95 (1.36) 83.47 (1.26) 83.25 (1.80) 82.24 (cid:63) (1.39) 82.58 (1.58) 82.10 (1.11) targeted GloVe 32.28 (2.23) 29.30 (cid:63) (0.54) 30.47 (cid:63) (2.45) 33.96 (1.30) 31.36 (cid:63) (1.78) 30.36 (cid:63) (1.56) 29.33 (cid:63) (1.47) TL 39.95 (2.02) 38.08 (3.13) 38.35 (2.01) 39.18 (2.88) 37.59 (1.99) 38.23 (1.89) 38.14 (2.23) R e s t a u r a n t N e g sentiment GloVe 53.41 (4.28) 49.78 (cid:63) (2.10) 47.69 (cid:63) (1.19) 56.01 (1.07) 48.86 (cid:63) (3.94) 50.58 (cid:63) (2.18) 49.86 (cid:63) (1.77) TL 60.69 (1.91) 62.61 (2.11) 60.80 (3.20) 60.45 (2.04) 61.70 (1.42) 60.06 (2.13) 60.66 (2.24) extraction GloVe 80.97 (1.47) 82.22 (1.29) 82.15 (0.74) 80.74 (1.58) 81.53 (0.32) 81.92 (0.91) 80.97 (1.14) TL 83.04 (1.26) 82.94 (cid:63) (0.97) 84.10 (0.86) 83.94 (1.67) 83.48 (1.59) 82.33 (cid:63) (1.37) 83.50 (1.16) targeted GloVe 43.28 (3.95) 40.95 (cid:63) (2.31) 39.19 (cid:63) (1.23) 45.22 (0.80) 39.85 (cid:63) (3.35) 41.43 (cid:63) (1.87) 40.38 (cid:63) (1.82) TL 50.40 (2.03) 51.92 (1.64) 51.15 (3.04) 50.75 (2.10) 51.49 (0.86) 49.45 (2.01) 50.68 (2.52) Table 5: Sentiment ( acc-s ), extraction ( F 1 -a ) and full targeted ( F 1 -i ) results for Laptop Neg and Restaurant Neg test split, where the values represent the mean (standard deviation) of five runs with a different random seeds.",
"jointly learning dependency relations (DR) and full lexical analysis (LEX) improve extraction results.",
"Furthermore, when using TL instead of GloVe embeddings, the best MTL model (NEGSFU ) does marginally beat the STL TL equivalent on average, indicating that multi-task learning is still able to contribute something to transfer learning.",
"On Laptop Spec and Restaurant Spec MTL models improve results when using GloVe embeddings, with the additional speculation (SPEC) and dependency relation (DR) data improving the F 1 -i metric by 0.5 pp and 0.49 pp respectively on average.",
"However, with TL, MTL only leads to benefits on the Restaurant dataset.",
"Unlike the negation data results, the speculation results appear to be helped more by syntactic auxiliary tasks like DR than semantic tasks like NEGCD and to some extent NEGSFU .",
"The best MTL GloVe models on the original datasets (LEX 15 and DR, respectively) also outper-15 The development F 1 -i result for LEX on the Laptop form the STL GloVe models on the challenge data, indicating that MTL leads to greater robustness.",
"When comparing the STL model using GloVe and TL on average the model improves by 9.55 pp on the negation dataset compared to 3.65 pp for the speculation suggesting that transfer learning is less effective for speculation.",
"In this paper, we have compared the effects of MTL using various auxiliary tasks for TSA and have created a negation and speculation annotated challenge dataset 16 for TSA in order to isolate the effects of MTL.",
"We show that TSA methods are drastically affected by negation and speculation effects in the data.",
"These effects can be similarly reduced by either incorporating auxiliary task information into the model through MTL or through transfer learning.",
"Additionally, MTL of negation dataset is worse than STL by 0.05 but for all other F 1 -i Laptop results LEX is better than STL.",
"16 https://bit.ly/312kwpP NEGCDDR LEX NEGSFUSPEC UPOS STLL a p t op S p e c sentiment GloVe 34.32 (1.86) 35.67 (1.00) 36.75 (1.91) 35.98 (2.05) 36.74 (1.64) 35.57 (1.31) 34.67 (1.40) TL 35.42 (3.54) 34.76 (1.63) 35.06 (1.97) 34.08 (cid:63) (0.40) 35.03 (2.36) 35.01 (1.04) 35.97 (1.45) extraction GloVe 74.77 (cid:63) (1.54) 74.01 (cid:63) (1.93) 77.80 (1.34) 75.99 (2.48) 73.39 (cid:63) (1.74) 76.80 (0.99) 75.01 (1.93) TL 80.11 (cid:63) (1.40) 80.77 (1.23) 81.47 (cid:63) (0.50) 83.14 (2.22) 81.49 (1.24) 81.07 (1.38) 79.84 (cid:63) (0.58) targeted GloVe 25.67 (cid:63) (1.62) 26.39 (cid:63) (0.60) 28.59 (1.42) 27.33 (1.70) 26.95 (1.07) 27.31 (0.82) 26.01 (cid:63) (1.26) TL 28.36 (2.81) 28.09 (1.68) 28.56 (1.60) 28.33 (0.52) 28.54 (1.83) 28.37 (0.77) 28.72 (1.20) R e s t a u r a n t S p e c sentiment GloVe 62.38 (3.75) 64.01 (2.72) 63.44 (2.21) 63.33 (1.87) 64.30 (3.14) 63.15 (3.38) 63.94 (1.84) TL 67.23 (1.08) 68.98 (1.17) 69.70 (2.51) 67.62 (1.58) 66.93 (1.79) 68.13 (1.25) 68.17 (2.44) extraction GloVe 75.53 (1.03) 76.40 (1.90) 75.75 (1.18) 75.66 (1.65) 75.29 (0.77) 75.87 (0.97) 75.58 (1.48) TL 77.92 (1.36) 77.84 (0.84) 79.10 (1.48) 78.76 (1.27) 78.20 (1.80) 77.15 (1.92) 77.61 (1.87) targeted GloVe 47.14 (3.24) 48.94 (3.06) 48.07 (2.22) 47.90 (1.25) 48.41 (2.48) 47.94 (3.14) 48.35 (2.32) TL 52.39 (1.18) 53.69 (0.69) 55.15 (2.70) 53.25 (1.10) 52.34 (1.85) 52.55 (1.22) 52.94 (2.99) Table 6: Sentiment ( acc-s ), extraction ( F 1 -a ) and full targeted ( F 1 -i ) results for Laptop Spec and Restaurant Spec test split, where the values represent the mean (standard deviation) of five runs with a different random seeds.",
"can lead to small improvements when combined with transfer learning.",
"Returning to the two original research questions, we can conclude that in general",
"1) MTL using negation (speculation) as an auxiliary task does make TSA models more robust to negated (speculative) samples and",
"2) transfer learning seems to incorporate much of the same knowledge.",
"Additionally, incorporating syntactic information as an auxiliary task within MTL creates models that are more robust to both negation and speculation.",
"Neither MTL nor TL are currently guarantees for improved performance 17 .",
"Additionally, the results from the challenge datasets indicate that different auxiliary tasks improve the performance of different subtasks of TSA.",
"This may suggest that the target extraction and sentiment classification tasks should not be treated as a collapsed labelling task, as the sentiment and extraction tasks are too dissimilar (Hu et al., 2019).",
"Future work should consider 17 Compare the performance of LEX using GloVe (28.59) to when it uses TL (28.56) in Table 6 for the Laptop dataset.",
"using pipeline or joint approaches, where each subtask can be paired with the most beneficial auxiliary tasks.",
"This decoupling could also allow MTL and transfer learning to compliment each other more.",
"Finally, in order to improve reproducibility and to encourage further work, we release the code 18 , dataset, and trained models associated with this paper, hyperparameter search details with compute infrastructure (Appendix E), number of parameters and runtime details (Appendix F), and further detailed dev and test results (appendices C and D), in line with the result checklist from Dodge et al. (2019).",
"This work has been carried out as part of the SANT project (Sentiment Analysis for Norwegian Text), funded by the Research Council of Norway (grant number 270908).",
"Andrew has been funded by Lan-18 https://github.com/jerbarnes/ multitask_negation_for_targeted_sentiment caster University by an EPSRC Doctoral Training Grant.",
"The authors thank the UCREL research centre for hosting the models created from this research."
] | [
"abstain",
"result",
"objective",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other"
] |
[
"Electronic Medical Records (EMRs) have become key components of modern medical care systems.",
"Despite the merits of EMRs, many doctors suffer from writing them, which is time-consuming and tedious.",
"We believe that automatically converting medical dialogues to EMRs can greatly reduce the burdens of doctors, and extracting information from medical dialogues is an essential step.",
"To this end, we annotate online medical consultation dialogues in a window-sliding style, which is much easier than the sequential labeling annotation.",
"We then propose a Medical Information Extractor (MIE) towards medical dialogues.",
"MIE is able to extract mentioned symptoms, surgeries, tests, other information and their corresponding status.",
"To tackle the particular challenges of the task, MIE uses a deep matching architecture, taking dialogue turn-interaction into account.",
"The experimental results demonstrate MIE is a promising solution to extract medical information from doctor-patient dialogues.",
"1 1 Introduction With the advancement of the informatization process of the medical system, Electronic Medical Records (EMRs) are required by an increasing number of hospitals all around the world.",
"Compared with conventional medical records, EMRs are easy to save and retrieve, which bring considerable convenience for both patients and doctors.",
"Furthermore, EMRs allow medical researchers to investigate the implicit contents included, such as epidemiologic study and patient cohorts finding.",
"Despite the advantages, most doctors complain that writing EMRs makes them exhausted (Wachter and Goldsmith, 2018).",
"According to the study of Sin-sky et al. (2016), physicians spend nearly two hours doing administrative work for every hour of face-time with patients, and the most time-consuming aspect is inputting EMRs.",
"We believe that automatically converting doctor-patient dialogues into EMRs can effectively remove the heavy burdens of doctors, making them more deliberate to communicate with their patients.",
"One straightforward approach is the end-to-end learning, where more supervised data, i.e., dialogue-EMR pairs are needed.",
"Unfortunately, such data is hard to acquire in medical domain due to the privacy policy.",
"In this paper, We focus on extracting medical information from dialogues, which we think is an essential step for EMR generation.",
"Extracting information from medical dialogues is an emerging research field, and there are only few previous attempts.",
"Finley et al. (2018) proposed an approach that consists of five stages to convert a clinical conversation to EMRs, but they do not describe the detail method.",
"Du et al. (2019) also focused on extracting information from medical dialogues, and successfully defined a new task of extracting 186 symptoms and their corresponding status.",
"The symptoms were relatively comprehensive, but they did not concern other key information like surgeries or tests.",
"Lin et al. (2019) collected online medical dialogues to perform symptom recognition and symptom inference, i.e., inference the status of the recognized symptoms.",
"They also used the sequential labeling method, incorporated global attention and introduced a static symptom graph.",
"There are two main distinctive challenges for tackling doctor-patient dialogues:",
"a) Oral expres-Dialogue Window Annotated Labels Patient: Doctor, could you please tell me is it premature beat?",
"sions are much more diverse than general texts.",
"There are many medical terms in the dialogue, but many of them are not uttered formally, which will lead to performance degradation of conventional Natural Language Processing (NLP) tools.",
"b) Available information is scattered in various dialogue turns, thus the interaction between turns should be also considered.",
"In order to meet these challenges, we first annotate the dialogues in a window-sliding style, as illustrated in Figure 1. Then, we propose MIE, a M edical I nformation E xtractor constructed on a deep matching model.",
"We believe our annotation method could put up with informal expressions, and the proposed neural matching model is able to harness the turn-interactions.",
"We collect doctor-patient dialogues from a popular Chinese online medical consultation website, Chunyu-Doctor 2 , where medical dialogues are in text format.",
"We focus on the cardiology domain, because there are more inquiries and less tests than other departments.",
"The annotation method considers both effectiveness and feasibility.",
"We define four main categories, including symptoms, tests, surgeries and other information, and we further define frequent items in the categories and their corresponding status at the same time.",
"There are two merits of our annotation method:",
"a) the annotation is much easier than the sequential labeling manner and does not need the labelers to be medical experts;",
"b) we can annotate the circumstances that a single label is expressed by multiple turns.",
"We totally annotate 1,120 dialogues with 18,212 2 https://www.chunyuyisheng.com segmented windows and obtain more than 40k labels.",
"We then develop MIE constructed on a novel neural matching model.",
"MIE model consists of four main components, namely encoder module, matching module, aggregate module and scorer module.",
"We conduct extensive experiments, and MIE achieves a overall F-score of 69.28, which indicates our proposed approach is a promising solution for the task.",
"We propose a new dataset, annotating 1,120 doctor-patient dialogues from online consultation medical dialogues with more than 40k labels.",
"The dataset will help the following researchers.",
"We propose MIE, a medical information extractor based on a novel deep matching model that can make use of the interaction between dialogue turns.",
"MIE achieves a promising overall F-score of 69.28, significantly surpassing several competitive baselines.",
"Extracting information from medical texts is a long-term objective for both biomedical and NLP community.",
"For example, The 2010 i2b2 challenge provides a popular dataset still used in many recent researches (Uzuner et al., 2011).",
"Three tasks were presented: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments.",
"Extracting medical information from dialogues just gets started.",
"Finley et al. (2018) proposed a pipeline method to generate EMRs.",
"The approach contains five steps: dialogue role labeling, Automatic Speech Recognition (ASR), knowledge extraction, structured data processing and Natural Language Generation (NLG) (Murty and Kabadi, 1987).",
"The most important part is knowledge extraction, which uses dictionary, regular expression and other supervised machine learning methods.",
"However, the detailed explanations are left out, which make us hard to compare with them.",
"Du et al. (2019) aimed at generating EMRs by extracting symptoms and their status.",
"They defined 186 symptoms and three status, i.e., experienced, not experienced and other.",
"They proposed two models to tackle the problem.",
"Span-Attribute Tagging Model first predicted the span of a symptom, and then used the context features to further predict the symptom name and status.",
"The seq2seq model took k dialogue turns as input, and then directly generated the symptom name and status.",
"They collected incredible 90k dialogues and annotated 3k of them, but the dataset is not public.",
"The most similar work to ours is (Lin et al., 2019), which also annotated Chinese online medical dialogues.",
"Concretely, they annotated 2,067 dialogues with the BIO (begin-in-out) schema.",
"There are two main components, namely symptom recognition and symptom inference in their approach.",
"The former utilized both document-level and corpus-level attention enhanced Conditional Random Field (CRF) to acquire symptoms.",
"The letter serves determining the symptom status.",
"Our work differs from (Du et al., 2019) and (Lin et al., 2019) mainly in the following two points:",
"a) we only extract 45 symptom items, but the status are more detailed, furthermore, we extract surgeries, tests and other information;",
"b) we use different extracting method.",
"Since the annotation system is different, our approach does not need the sequential labeling, which relieves the labeling work.",
"We collect doctor-patient dialogues from a Chinese medical consultation website, Chunyu-Doctor.",
"The dialogues are already in text format.",
"We select cardiology topic consultations, since there are more inquiries, while dialogues of other topics often depend more on tests.",
"A typical consultation dialogue is illustrated in Figure 1. The principle of the annotation is to label useful information as comprehensive as possible.",
"A commonly utilized annotation paradigm is sequential labeling, where the medical entities are labeled using BIO tags (Du et al., 2019; Lin et al., 2019; Collobert et al., 2011; Huang et al., 2015; Ma and Hovy, 2016).",
"However, such annotation methods cannot label information that",
"a) expressed by multiple turns and",
"b) not explicitly or not consecutively expressed.",
"Such situations are not rare in spoken dialogues, as can be seen in Figure 1. To this end, we use a window-to-information annotation method instead of sequential labeling.",
"As listed in Table 1, we define four main categories, and for each category, we further define frequent items.",
"The item quantity of symptom , surgery , test and other info is 45, 4, 16 and 6, respectively.",
"In medical dialogues, status is quite Category Item Status Symptom BackachePerspirationHiccupsNauseaCyanosisFeverFatigueAbdominaldiscomfort... patient-positive (appear) patient-negative (absent) doctor-positive (diagnosed) doctor-negative (exclude) unknown Surgery Interventional treatment Radiofrequency ablation Heart bypass surgery Stent implantation patient-positive (done) patient-negative (not done) doctor-positive(suggest)doctor-negative(deprecated)unknown Test B-mode ultrasonography CT examination CT angiography CDFIBlood pressure measure-mentUltrasonographyMRIThyroidfunction test Treadmill test ... patient-positive(done)patient-negative(notdone)doctor-positive(suggest)doctor-negative(deprecated)unknown Other info SleepDiet Mental condition DefecationSmokingDrinking patient-positive (normal) patient-negative (abnormal) unknown Table 1: The detailed annotation labels of the dataset.",
"crucial that cannot be ignored.",
"For example, for a symptom, the status of appearance or absence is opposite for a particular diagnose.",
"So it is necessary to carefully define status for each category.",
"The status options vary with different categories, but we use unified labels for clarity.",
"The exact meanings of the labels are also explained in Table 1. The goal of annotation is to label all the pre-defined information mentioned in the current dialogue.",
"As the dialogues turn to be too long, it is difficult for giving accurate labels when finishing reading them.",
"Thus, we divide the dialogues into pieces using a sliding window.",
"A window consists of multiple consecutive turns of the dialogue.",
"It is worth noting that the window-sliding annotations can be converted into dialogue-based ones like dialogue state tracking task (Mrksic et al., 2017), the later annotation state will overwrite the old one.",
"Here, the sliding window size is set to 5 as Du et al. (2019) did, because this size allows the included dialogue turns contain proper amount of information.",
"For windows with less than 5 utterances, we pad them at the beginning with empty strings.",
"The sliding step is set to 1. We invite three graduate students to label the dialogue windows.",
"The annotators are guided by two physicians to ensure correctness.",
"The segmented windows are randomly assigned to the annotators.",
"In all, we annotate 1,120 dialogues, leading to 18,212 windows.",
"We divide the data into train/develop/test sets of size 800/160/160 for dialogues and 12,931/2,587/2,694 for windows, respectively.",
"In total, 46,151 labels are annotated, averaging 2.53 labels in each window, 41.21 labels in each dialogue.",
"Note that about 12.83% of windows have no gold labels, i.e., there is no pre-defined information in those windows.",
"The distribution of the labels is shown in Table 2. The status distribution is shown in Table 3. The annotation consistency, i.e., the cohen's kappa coefficient (Fleiss and Cohen, 1973) of the labeled data is 0.91, which means our annotation approach is feasible and easy to follow.",
"Dialogue Window Symptom Surgery Test Other info Train 800 12931 21420 839 8879 1363 Dev 160 2587 4254 119 1680 259 Test 160 2694 4878 264 1869 327 Total 1120 18212 30552 1222 12428 1949 Table 2: The detailed annotation statistics of the dataset.",
"Patient-pos Patient-neg Doctor-pos Doctor-neg Unknown Symptom 15119 1782 1655 910 11086 Surgery 169 48 698 10 297 Test 5589 303 4443 44 2049 Other info 550 1399 -1505 Table 3: The distribution of status over all labels.",
"We evaluate the extracted medical information results as ordinary information extraction task does, i.e., Precision, Recall and F-measure.",
"To further discover the model behavior, we set up three evaluation metrics from easy to hard.",
"Category performance is the most tolerant metric.",
"It merely considers the correctness of the category.",
"Item performance examines the correctness of both category and item, regardless of status.",
"Full performance is the most strict metric, meaning that category, item and the corresponding status must be completely correct.",
"Window-level : We evaluate the results of each segmented window, and report the micro-average of all the test windows.",
"Some windows have no gold labels, if the prediction on a window with no gold labels is also empty, it means the model performs well, so we set the Precision, Recall and F-measure to 1, otherwise 0.",
"Dialogue-level : First we merge the results of the windows that belong to the same dialogue.",
"For labels that are mutually exclusive, we update the old labels with the latest ones.",
"Then we evaluate the results of each dialogue, and finally report the micro-average of all the test dialogues.",
"In this section, we will elaborate the proposed MIE model, a novel deep matching neural network model.",
"Deep matching models are widely used in multiple natural language processing tasks such as machine reading comprehension (Seo et al., 2017; Yu et al.), question answering (Yang et al., 2016) and dialogue generation (Zhou et al., 2018; Wu et al., 2017).",
"Compared with classification models, matching models are able to introduce more information of the candidate side and promote interaction between both ends.",
"The architecture of MIE is shown in Figure 2. There are four main components, namely encoder module, matching module, aggregate module and scorer module.",
"The input of MIE is a doctor-patient I can't breathe out.",
"dialogue window, and the output is the predicted medical information.",
"The encoder is implemented by Bi-LSTM (Hochre-iter and Schmidhuber, 1997) with self-attention (Vaswani et al., 2017).",
"Let the input utterance be X = ( x 1 , x 2 , ..., x l ) , the encoder works as follows: H = BiLSTM( X ) a [ j ] = W H [ j ] + b p = softmax( a ) c = X j p [ j ] H [ j ] (1) We denote H, c = Encoder( X ) for brevity.",
"H consists contextual representations of every token in input sequence X , and c is a single vector that compresses the information of the entire sequence in a weighted way.",
"We denote a window with n utterances as { U [1] , ...U [ n ] } .",
"For a candidate consists of category, item and status like Symptom:Heart failure (patient-positive) , we split it to category-item pair Symptom:Heart failure denoted by V and status patient-positive denoted by S .",
"To introduce more oral information, we also add item-related colloquial expressions collected during the annotation to the end of V .",
"Having defined the basic structure of the encoder, we now build representations for utterances U in the dialogue window, and the candidate category-item pair V and its status S : H uttc [ i ] , c uttc [ i ] = Encoder uttc ( U [ i ]) H utts [ i ] , c utts [ i ] = Encoder utts ( U [ i ]) H canc , c canc = Encoder canc ( V ) H cans , c cans = Encoder cans ( S ) (2) Where the superscript utt and can represents utterance encoder and candidate encoder respectively, the subscript c and s represents category encoder and status encoder respectively, and i 2 [1 , n ] is the index of utterance in the dialogue window.",
"All the candidates will be encoded in this step, but we only illustrate one in the figure and equations for brevity.",
"Note that U , V , S is encoded with encoders differ from utterance to candidate and from category to status in order to make each encoder concentrate on one specific type (category-specific and status-specific) of information.",
"In this step, the category-item representation is treated as a query in attention mechanism to calculate the attention values towards original utterances.",
"Then we can obtain the category-specific representation of utterance U [ i ] as q c [ i ] .",
"a c [ i, j ] = c canc H uttc [ i, j ] p c [ i ] = softmax( a c [ i ]) q c [ i ] = X j p c [ i, j ] H utt c [ i, j ] (3) Meanwhile, the status representation is treated as another query to calculate the attention values towards original utterances.",
"Then we can obtain the status-specific representation of utterance U [ i ] as q s [ i ] .",
"a s [ i, j ] = c cans H utts [ i, j ] p s [ i ] = softmax( a s [ i ]) q s [ i ] = X j p s [ i, j ] H utts [ i, j ] (4) Where [ i, j ] denotes the j th word in the i th utterance.",
"The goal of this step is to capture the most relevant information from each utterance given a candidate.",
"For example, if the category-item pair of the candidate is Symptom: Heart failure , the model will assign high attention values to the mentions of heart failure in utterances.",
"If the status of the candidate is patient-positive , the attention values of expressions like I have, I've been diagnosed will be high.",
"So the matching module is important to determine the existence of a category-item pair and status related expressions.",
"The matching module introduced above have captured the information of the existence of category-item pairs and status.",
"To know whether a candidate is expressed in a dialogue window, we need to obtain the category-item pair information and its status information together.",
"In particular, we need to match every category-item representation q c [ i ] with q s [ i ] .",
"Sometimes the category-item pair information and its status information appear in the same utterance.",
"But sometimes, they will appear in different utterances.",
"For example, many question-answer pairs are adjacent utterances.",
"So we need take the interactions between utterances into account.",
"Based on this intuition, we define two kinds of strategies to get two different models.",
"MIE-single: The first strategy assumes that the category-item pair information and its status information appear in the same utterance.",
"The representation of the candidate in the i th utterance is a simple concatenation of q c [ i ] and q s [ i ] : f [ i ] = concat( q c [ i ] , q s [ i ]) (5) Where f [ i ] consists information of category-item pair and its status which can be used to predict the score of the related candidate.",
"The model only considers the interaction within a single utterance.",
"The acquired representations are independent from each other.",
"This model is called MIE-single.",
"MIE-multi: The second strategy considers the interaction between the utterances.",
"To obtain the related status information of other utterances, we treat q c [ i ] as a query to get the attention values towards the representations of status, i.e., q s .",
"Then we can obtain the candidate representation of the utterance: a [ i, k ] = q c [ i ] TW q s [ k ] p [ i ] = softmax( a [ i ]) e q s [ i ] = X k p [ i, k ] q s [ k ] f [ i ] = concat( q c [ i ] , e q s [ i ]) (6) Where W is a learned parameter, and e q s is the new representation of the status, containing the relative information of other utterances.",
"The utterance order is an important clue in a dialogue window.",
"For example, the category-item pair information can hardly related to status information whose utterance is too far.",
"In order to capture this kind of information, we also take utterance position into account.",
"Concretely, we add positional encoding (Vaswani et al., 2017) to each q c and q s at the beginning.",
"We denote this model as MIE-multi.",
"The output of the aggregate module contains the information of a entire candidate, including category-item and status information.",
"The output of the aggregate module is fed into a scorer module.",
"We use each utterance's feature f [ i ] to score the candidate, as it is already the candidate-specific representation.",
"The highest score of all the utterances in the window is the candidate's final score: s utt [ i ] = feedforward( f [ i ]) y = sigmoid(max( s utt [ i ])) (7) Where feedforward is a 4 layer full-connection neural network.",
"The loss function is the cross entropy loss defined as follows:",
"L = 1 KL X k X l \u0000 y kl log( b y kl )+ (1 \u0000 y kl ) log(1 \u0000 y kl ) (8)",
"The superscript k denote the index of the training sample, and l is the index of the candidate.",
"K and L are the number of samples and candidates respectively.",
"b y kl is the true label of the training sample.",
"There could be more than one answer in a dialogue window.",
"In the inference phase, we reserve all the candidates whose matching score is higher than the threshold of 0.5.",
"Since the training process is performed in the window size, the inference phase should be the same situation.",
"We also obtain the dialogue-level results by updating the results of windows as aforementioned.",
"In this section, we will conduct experiments on the proposed dataset.",
"It is worth to note that we are not going to compare MIE with (Du et al., 2019) and (Lin et al., 2019), because",
"a) they all employed sequential labeling methods, leading to different evaluation dimensions from ours (theirs are more strict as they must give the exact symptom positions in the original utterance), and",
"b) their approaches were customized for sequential labeling paradigm, thus cannot be re-implemented in our dataset.",
"We use pretrained 300-dimensional Skip-Gram (Mikolov et al., 2013) embeddings to represent chinese characters.",
"We use Adam (Kingma and Ba, 2015) optimizer.",
"The size of the hidden states of both feed-forward network and Bi-LSTM is 400.",
"We apply dropout (Srivastava et al., 2014) with 0.2 drop rate to the output of each module and the hidden states of feed-forward network for regularization.",
"We adopt early stopping using the F1 score on the development set.",
"1) Plain-Classifier.",
"We develop a basic classifier model that uses the simplest strategy to accomplish the task.",
"The input of the model are the utterances in the window.",
"We concatenate all the utterances to obtain a long sequence, and encode it using a Bi-LSTM encoder, then we use self-attention to represent it as a single vector.",
"Next, the vector is fed into a feed-forward classifier network.",
"The output labels of the classifier consist of all the possible candidates.",
"The encoder adopts category-specific parameters.",
"2) MIE-Classifier.",
"To develop a more competitive model, we reuse MIE model architecture to implement an advanced classifier model.",
"The difference between the classifier model and MIE is the way of obtaining q c and q s .",
"Instead of matching, the classifier model treats c uttc and c utts directly as q c and q s respectively.",
"Thanks to the attention mechanism in the encoder, the classifier model can also capture the category-item pair information and the status information to some extent.",
"To further examine the effect of turn-interaction, we develop two classifiers as we do in MIE.",
"MIE-Classifier-single treats each utterance independently, and the probability score of each utterance is calculated.",
"The model uses a max-pooling operation to get the final score.",
"MIE-Classifier-multi considers the turn-interaction as MIE-multi does.",
"The experimental results are shown in Table 4. From the results, we can obtain the following observations.",
"1) MIE-multi achieves the best F-score on both window-level and dialogue-level full evaluation metric, as we expected.",
"The F-score reaches 66.40 and 69.28, which are considerable results in such sophisticated medical dialogues.",
"2) Both of the models using multi-turn interactions perform better than models solely using single utterance information, which further indicates the relations between turns play an important role in dialogues.",
"The proposed approach can capture the interaction.",
"As a proof, MIE-multi achieves a 2.01% F-score improvement in dialogue-level full evaluation.",
"3) Matching-based methods surpass classifier models in full evaluation.",
"We think the results are rational because matching-based methods can introduce candidate representation.",
"This also motivates us to leverage more background knowledge in the future.",
"Note that in category and item metrics, MIE-classifiers are better at times, but they fail to correctly predict the status information.",
"4) Both MIE models and MIE-classifier models overwhelm Plain-Classifier model, which indicates the MIE architecture is far more effective than the basic LSTM representation concatenating method.",
"5) Dialogue-level performance is not always better than window-level performance in full evalua-Window-level Dialogue-level Model Category Item Full Category Item Full P R F1 P R F1 P R F1 P R F1 P R F1 P R F1 Plain-Classifier 67.21 63.78 64.92 60.89 49.20 53.81 53.13 49.46 50.69 93.57 89.49 90.96 83.42 73.76 77.29 61.34 52.65 56.08 MIE-Classifier-single 80.51 76.39 77.53 76.58 64.63 68.30 68.20 61.60 62.87 97.14 91.82 93.23 91.77 75.36 80.96 71.87 56.67 61.78 MIE-Classifier-multi 80.72 77.76 78.33 76.84 68.07 70.35 67.87 64.71 64.57 96.61 92.86 93.45 90.68 82.41 84.65 68.86 62.50 63.99 MIE-single 78.62 73.55 74.92 76.67 65.51 68.88 69.40 64.47 65.18 96.93 90.16 92.01 94.27 79.81 84.72 75.37 63.17 67.27 MIE-multi 80.42 76.23 77.77 77.21 66.04 69.75 70.24 64.96 66.40 98.86 91.52 92.69 95.31 82.53 86.83 76.83 64.07 69.28 Table 4: The experimental results of MIE and other baseline models.",
"tion.",
"In our experiment, the classifier-based models perform better in window-level than dialogue-level in full evaluation.",
"The possible reason is error accumulation.",
"When the model predicts results the current window does not support, the errors will be accumulated with the processing of the next window, which will decrease the performance.",
"To further analyze the behavior of MIE-multi, we print the confusion matrix of category-item predictions, as shown in Figure 3. We denote the matrix as A , A [ i ][ j ] means the frequency of the circumstance that the true label is i while MIE-multi gives the answer j .",
"We study the matrix and find that MIE-multi failed to predict Symptom:Limited mobility , Symptom:Nausea , Symptom: Cardiomyopathy , and Test: Renal function test , which are emphasized by orange blocks ( A [ i ][ i ] = 0 ) in Figure 3. The Patient: I have atrial fibrillation, heart failure, anemia and loss my appetite.",
"possible reason is that they rarely appear in the training set, with frequency of 0.63%, 2.63%, 2.38% and 1.25%, respectively.",
"The results reveal that the data sparse and uneven problems are the bottlenecks of our approach.",
"In this part, we will analyze some cases to verify the effectiveness of the model with best performance, e.g. MIE-multi.",
"Particularly, we investigate an example shown in Figure 4. To determine whether the candidate Symptom:Coronary heart disease (patient-negative) is mentioned in the window, we should focus on the interaction between the adjacent pair located in the last of the window.",
"This adjacent pair is a question-answer pair, the category-item pair information is in the question of the doctor while the status information is in the answer of the patient.",
"In this case, MIE-Patient: What is the effect of sinus arrhythmia?",
"For better understanding, we utilize visualization for matching module and aggregate module.",
"Figure",
"4(a) is the attention heat map when the category-item pair information vector c canc matches the utterances category representations H uttc .",
"We can observe that the attention values of the mention of coronary heart disease are relatively high, which illustrates that the model can capture the correct category-item pair information in the window.",
"Figure",
"4(b) is the attention heat map when the status information c cans matches the utterances status representation H utts .",
"The attention values of the expressions related to status such as Yes and No are high, and the expression No is even higher.",
"So MIE-multi can also capture the status information in the window.",
"We also visualize the interaction between the fourth utterance and the other utterances.",
"In Figure",
"4(c), the score of the fifth utterance is the highest, which is in line with the fact that the fifth utterance is the most relevant utterance in the window.",
"In this way the model successfully obtains the related status information for the category-item pair information in the window.",
"We demostrate a case in Figure 5 that can explicitly show the need for turn interaction, where MIE-multi shows its advancement.",
"In this case, the label Symptom:Sinus arrhythmia (patient-positive) requires turn interaction information.",
"Specifically, in the third utterance, the patient omits the reason that makes him sick.",
"However, under the complete context, we can infer the reason is the sinus arrhythmia, since the patient consulted the doctor at the beginning of the window.",
"The model need to consider the interaction between different utterances to get the conclusion.",
"Interaction-agnostic model like MIE-single makes prediction on single utterance, and then sums them up to get the final conclusion.",
"Consequently, it fails to handle the case when the expressions of category-item and status are separated in different utterances.",
"As a result, MIE-single only obtains the category-item information Symptom:Sinus arrhythmia , but the status prediction is incorrect.",
"In contrast, MIE-multi is able to capture the interaction between different utterances and predicts the label successfully.",
"In this paper, we first describe a new constructed corpus for the medical information extraction task, including the annotation methods and the evaluation metrics.",
"Then we propose MIE, a deep neural matching model tailored for the task.",
"MIE is able to capture the interaction information between the dialogue turns.",
"To show the advantage of MIE, we develop several competitive baselines for comparison.",
"The experimental results indicate that MIE is a promising solution for medical information extraction towards medical dialogues.",
"In the future, we should further leverage the internal relations in the candidate end, and try to introduce rich medical background knowledge into our work.",
"This work is supported by the National Natural Science Foundation of China (No.61533018, No.61922085, No.61906196) and the Key Research Program of the Chinese Academy of Sciences (Grant NO. ZDBS-SSW-JSC006).",
"This work is also supported by Beijing Academy of Artificial Intelligence (BAAI2019QN0301), the Open Project of Beijing Key Laboratory of Mental Dis-roders (2019JSJB06) and the independent research project of National Laboratory of Pattern Recognition."
] | [
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other"
] |
[
"While a great deal of work has been done on NLP approaches to lexical semantic change detection, other aspects of language change have received less attention from the NLP community.",
"In this paper, we address the detection of sound change through historical spelling.",
"We propose that a sound change can be captured by comparing the relative distance through time between the distributions of the characters involved before and after the change has taken place.",
"We model these distributions using PPMI character embeddings.",
"We verify this hypothesis in synthetic data and then test the method's ability to trace the well-known historical change of lenition of plosives in Danish historical sources.",
"We show that the models are able to identify several of the changes under consideration and to uncover meaningful contexts in which they appeared.",
"The methodology has the potential to contribute to the study of open questions such as the relative chronology of sound shifts and their geographical distribution.",
"The study of sound change goes back to the beginnings of modern linguistics in early nineteenth century, when scholars such as Rask and Grimm started making hypotheses about the way sound changes over time, which in turn lead to the discovery of regular sound correspondences between ancient languages and the identification of cognates in modern ones (Murray, 2015).",
"Since spoken language from the past is not available, sound change in ancient languages must be deduced from written records by considering development in spelling through time.",
"In addition, while we may be able to see from the written records that a change did occur, less is known on the specific dynamics of the change.",
"Details of these dynamics include knowledge of when the change started to appear, how long it took for it to be complete, what was the relative chronology of individual sounds in a larger shift, what was the geographical distribution of a change and so forth.",
"Due to the sparsity of linguistic evidence, detailed empirical studies of chronological sound change are difficult to conduct.",
"This is especially the case for older stages of languages, where little written text was produced, and much of what did exist has been lost in transmission.",
"However, as we move forward in history to the rise of bureaucracy, for example in medieval Europe, we see that an extensive amount of written records were made.",
"Text from this period of time is interesting in the context of a study of sound change because it shows great variability in spelling patterns.",
"With the digitalization of such archives 1 , therefore, new opportunities arise to apply computational methods to the study of sound change through written text.",
"Considerable effort has already been devoted to the development of computational approaches aimed at discovering lexical semantic change (LSC) in historical corpora (Tahmasebi et al., 2018).",
"However, change related to phonology, morphology and syntax has remained out of the spotlight in NLP research.",
"In this study, we seek to bridge this gap as regards phonology.",
"Inspired by the work on LSC, we propose a method whereby sound change is traced via spelling change in historical text and modeled by training diachronic character embeddings over text from different time periods.",
"We start by reviewing previous approaches to the automatic detection of semantic shifts and spelling modification due to sound change.",
"Then we formulate our hypothesis that a sound change can be traced using diachronic distributional embeddings.",
"While sound change is not completely analogous to word meaning change, we argue that similar meth-1 A list of available resources for different languages is provided in the Guide to Medieval Manuscript Research from the University of Chicago Library: https://guides.lib.",
"ods can be used for both.",
"To verify our hypothesis, we conduct three studies on simulated sound change.",
"First, we test the methods on the phonological environment of a simple artificial language.",
"Then, we apply the same methods to a more complex scenario created by simulating sound change in a corpus of synchronic Danish text.",
"Having established the suitability of the methods on these two datasets, we finally experiment with tracing a well-known sound change in real historical language data, again in Danish.",
"The application of NLP methods to automatic LSC detection is already a rather well-developed sub-field of NLP research (Tahmasebi et al., 2018; Kutuzov et al., 2018).",
"In particular, the emergence of word embeddings as a viable way to model the distributional hypothesis in semantics (Firth, 1957) has paved the way for an application of word embeddings to LSC modeling (Kim et al., 2014; Hamilton et al., 2016b; Eger and Mehler, 2016; Yao et al., 2018).",
"Synchronically, the meaning of a word is characterized by word embeddings in terms of the contexts it appears in.",
"LSC is captured by training word embeddings at different time points and comparing these distributions typically using cosine distance.",
"Recently, contextualized word embeddings have also been applied to the problem.",
"While such models have the capability to capture change in distinct word usages, preliminary results suggest that traditional word embeddings are su-perior to the task of semantic change detection (Schlechtweg et al., 2020; Montariol et al., 2021).",
"One of the main issues in this comparison is the temporal alignment of dense embedding spaces.",
"For example in the case of neural models, embeddings are initialized and trained stochastically, which means that separate runs on even the same data will yield different embedding spaces.",
"Thus, work has focused on the development of methods to perform alignments to make dense embedding spaces comparable across time (see Kutuzov et al. (2018) for an overview).",
"As an alternative to neural embeddings, scholars have also used purely count-based measures, which are naturally aligned across dimensions.",
"Normalization techniques are 2 https://github.com/syssel/ letters-from-the-past also applied, e.g. based on positive pointwise mutual information (PPMI) (Hamilton et al., 2016b; Yao et al., 2018).",
"Most studies of LSC do not rely on a control dataset against which to validate their conclusions.",
"In Dubossarsky et al. (2017), on the contrary, it is argued that any claims about putative laws of semantic change in diachronic corpora must be evaluated against a relevant control condition.",
"The authors propose a methodology in which a control condition is created artificially from the original diachronic text collection by reshuffling the data.",
"No systematic LSC is expected in the artificially developed control dataset.",
"The distributional hypothesis has also been proposed as an explanatory model within the domain of phonology suggesting that phonological classes are acquired through distributional information (Chomsky and Halle, 1968; Mielke, 2008).",
"Driven by this hypothesis, recent work has focused on testing how distributional properties can be learned by phoneme embeddings (see Mayer 2020 for an overview).",
"Silfverberg et al. (2018) investigated to what extent learned vector representations of phonemes align with their respective representations in a feature space in which dimensions are articulatory descriptors (e.g., plosive).",
"Recently, Mayer (2020) has shown that phonological classes, such as long and short vowels, can be deduced from phoneme embeddings normalized using PPMI by iteratively performing PCA on candidate classes.",
"Thus, while the distributional hypothesis for phonology is well-established, one notable issue is the fact that the empirical evidence to study sound change is relatively inaccessible since it requires recorded speech or phonologically transcribed data.",
"Simulation is therefore used as a tool for studying the underlying mechanisms of sound change by creating computational models based on linguistic theory (Wedel, 2015).",
"Through simulation, questions pertaining to e.g., what factors influence the (in)stability of vowel systems across generations (de Boer, 2003) can be modeled by controlling the assumptions made by the model.",
"Work on simulation ranges from implementing theoretical approaches using mathematical models (Pierre-humbert, 2001; Blythe and Croft, 2012) to iterated learning and neural networks (Hare and Elman, 1995; Begu, 2021).",
"level, they are primarily theoretically driven.",
"In this paper, we wish to take a data-driven approach and utilize some of the methods reviewed above to track historical sound change in writing.",
"Rather than using word embeddings as done to model lexical change, we will use character embeddings, that are better suited to the task of sound change modeling.",
"Within the field of LSC detection, change in word semantics is traditionally measured by computing pairwise similarity (Hamilton et al., 2016b) over a time series, ( t , ..., t + ), in which a shift in the meaning of a word, w i , can be measured by its relative distance to another word, w j .",
"In this way, hypotheses about specific shifts may be tested.",
"Another measure is semantic displacement , in which semantic change for a given word is quantified by measuring its temporal displacement.",
"For both measures, looking at consecutive time steps provides a measure to the rate of change of a word in relation to another word, or independently.",
"While LSC is about meaning shifts of unchanged word forms, sound change is a change of form, i.e., a given phoneme changes to another one within certain contexts.",
"We denote such a change a b / c , where c' stands for a given context.",
"While changes of either a or b will be reflected in changes to their individual distributions ( displacement ), looking at them independently of one another will not tell us whether one of the phonemes is becoming similar to the other.",
"Therefore, we suggest to look at the pairwise similarity between a and b .",
"More specifically, given a time series ( t 1 , ..., t n ), in which t 1 denotes a time before a sound change was in effect and t n denotes a time where a sound change is completed, we expect b i to move towards a 1 as i n , in other words to become similar to a 1 , since it will begin to appear in the same contexts.",
"As was noted earlier, sound is not accessible in historical text, to which we would like to be able to apply our methodology.",
"In historical text preceding spelling conventions, sound is assumed to be reflected in spelling.",
"While detailed philological and linguistics analyses of written language can help to determine if a distinct spelling corresponds to a particular phoneme, or whether that spelling is rather a reflection of synchronic spelling variation (Minkova, 2015), resources including such analyses are scarce.",
"Thus, we chose to use characters as a proxy for sound, and model sound change through changes in the distance between pairs of character distributions.",
"In addition, before assuming that an observed decrease in the distance between two such distributions reflects a real change, we also want to see that the same decrease is not visible in a control corpus in which no such change has indeed taken place.",
"In order to verify the hypothesis that sound change can be traced using distributional information with the methodology proposed above, we test whether we are able to trace simulated change in synthetic data.",
"As a first synthetic setting, we restrict ourselves to track change in a synthetic language with simple phonotactics.",
"In this way, we get a sense of whether the proposed hypothesis works under perfect conditions, i.e., one in which characters correspond with phonemes one-to-one.",
"In the second synthetic setting, we seek to imitate the condition of tracing change in an orthographic setting by simulating change in a corpus of synchronic text in which character distributions interact with the noise added by spelling and lexicon.",
"In both synthetic settings, we compare the simulated change to a control setting where no change has occurred.",
"Finally, we will test the hypothesis on real data.",
"Our goal is to trace the lenition after vowels of voiceless plosives, p t k , to their voiced counterparts, b d g , in historical Danish.",
"While this change is believed to have initiated around the beginning of the 14 th century, details about the relative chronology of the series and geographical distribution of the change are difficult to account for (Frederiksen, 2018).",
"Therefore, in an attempt to discover interesting patterns of this change, we train character embeddings on historical sources from the periods following the time when the change is believed to have started.",
"As we did for the synthetic data, and again following Dubossarsky et al. (2017), we also introduce a control setting to test the significance of the observed changes.",
"Parupa is an artificial language introduced by Mayer (2020).",
"It is characterized by a small phonological inventory 3 , and simple phonotactic rules for how sounds combine: only CV syllables are allowed 3 C : / p t k b d g r / V : / i e u o a / 6715 / p t k / occur before high vowels, / i u / / b d g / occur before non-high vowels, / e o / only / b p / occur word-initially / r / occurs before all vowels all consonants can occur before / a / We created five corpora of Parupa consisting of 20,000 words each using the Hidden Markov Model provided by Mayer (2020) 4 : While the first corpus, parupa 1 , preserves the phonotactic rules listed above, the remaining four include a sound change, p b /_ u, i 5 , which happens gradually (linearly) and is fully completed in parupa 5 .",
"Additionally, we created five control corpora (one for each of the target ones and with the same vocabulary) which do not include any simulated sound change.",
"Each of the corpora consists of 50 , 000 words.",
"The Danish UD treebank To collect a corpus of synchronic language, we extracted the training sentences from the Danish UD treebank (Johannsen et al., 2015).",
"From this collection of sentences, we extracted five sub-corpora ( UD-Danish 1-5 ) consisting of 16,000 words each, in which we simulated a sound change, g k / V_{V # t#} 6 .",
"As done in the case of Parupa, the sound change was simulated gradually, with linear increase in change probabilities (i.e., 0 , 0 . 25 , 0 . 50 , 0 . 75 , 1 ).",
"To create the control condition, we also kept a version of the sub-corpora where no change was simulated.",
"The five control versions are thus identical to the five target corpora in terms of vocabulary and distributions, except for the simulated change.",
"Historical spellings of geographical names Danmarks Stednavne is a on-going lexicographic book series creating a register of geographical names in Denmark.",
"The register also serves as a philological resource by listing attestations of the names coming from various historical resources.",
"distributional_learning 5 The underscore indicates the position of the changing element, i.e., p changes into b when preceding u or i .",
"This notation using an underscore to indicate position of the changing element will be used throughout the rest of the paper.",
"6 i.e., g between vowels, word-final after vowel, or after vowel preceding word-final t .",
"For example, the entry for Copenhagen includes over 700 historical attestations listed by date 7 .",
"In addition to the printed volumes (Danmarks Stednavne, 19222013), geographical names and their connected metadata (e.g., geographical location and historical attestations) have been digitized, and can be found in an online edition 8 which comprizes over 210 , 000 names and 900 , 000 historical attestations.",
"To study the lenition of / p t k /, we extracted historical attestations of names ranging from the 12 th to the 18 th century.",
"Using the attestation before the 14 th century as a reference to the time before the change was initiated ( t 1 ), we divided the list of names into bins of half a century to track the development of character embeddings through time.",
"The choice of bin size is an important methodological consideration when tracing language change (Kutu-zov et al., 2018).",
"From a philological perspective, 50 years correspond to two generations of writers (spellers'), which is considered a realistic bin size to track development of spelling in writing.",
"This provides us with eleven sub-corpora with 31 , 000 ( 15 , 000 ) name tokens on average.",
"In order to create a control setting, we generated a corresponding number of sub-corpora by stratifying the names with respect to their date of attestation, corresponding to the shuffle' approach suggested by Dubossarsky et al. (2017).",
"In this approach, no diachronic change is expected to be observed, as attestations are distributed evenly across bins based on their original date of occurrence.",
"To represent characters in a distributional space, we use PPMI embeddings.",
"Contrary to dense embeddings, these are easy to interpret and when compared across different initializations, they are naturally aligned, so we do not introduce noise caused by the alignment process.",
"Using the implementation by Mayer (2020), the sliding window is directional, and thus we distinguish contexts preceding and following the target character.",
"While this directionality is neglected when creating PPMI word embeddings, the direction matters when using character embeddings to test the intuition behind the distributional hypothesis, in which direction in a context is meaningful.",
"conditioning of the change aimed to be captured: For Parupa, the simulated change is conditioned on only one character, and thus for this experiment we applied bigrams.",
"For UDDanish, we applied trigams as the change is conditioned by two characters (the preceding and succeeding).",
"For the tracking of lenition in Danish, the condition of the change is expected to be similar to the one we simulated in the synthetic setting of UDDanish.",
"However, to ensure we capture enough context, in this case we expand the model to using 4-grams.",
"We measure sound change in terms of a decrease in the distance between two character distributions over time.",
"In other words, given two character distributions A and B corresponding to any two phonemes / a / and / b /, we should see that distance ( A (1) , B ( n ) ) gets smaller for greater values of n if there is a change A B .",
"While most studies use cosine distance to measure the difference between distributions (Kutuzov et al., 2018), we chose to use Euclidean distance as it directly reflects our hypothesis by taking the sum of differences in each dimension (context).",
"For each of the corpora being investigated, we use the R software (R Core Team, 2021) and the effects' package (Fox and Weisberg, 2019) to build linear regression models that predict the distributional distance between two sounds per temporal interval in the target and the control versions of the corpus.",
"The advantage of employing linear regression in this case is that we can test the effect of multiple factors as well as their interaction.",
"In our case, the distance between the two sounds being investigated is the dependent variable, and we want to predict the main effects of temporal interval and corpus as well as the interaction between them.",
"To argue that there has been a sound change across time, there must be a significant effect of temporal interval on distance.",
"In addition, we would like to see an interaction between this effect and the effect of the corpus variable in that the change should be absent, or at least significantly smaller, in the control corpus.",
"Table 1 shows the results of the linear regression models we developed to test whether any evidence of sound change discovered in the target corpora, where sound change is either simulated or histori-3",
"The intercept' estimate corresponds to the distance predicted between the two sounds being investigated in the initial temporal interval.",
"The Bin' estimate shows by how much the distance is expected to change for every temporal interval.",
"A negative effect means that the distance between the two sounds is becoming smaller.",
"The Con-trol' effect shows the predicted change to the initial Intercept in the control corpus (this corresponds to the effect of the corpus variable), and finally Bin:Control' shows the interaction between temporal bin and corpus type.",
"In both corpora where change is simulated, there is a significant effect of temporal interval.",
"This is expected given the fact that gradual change has been induced in the data.",
"For both corpora, the effect of the control corpus on the initial sound distance is not significant.",
"Importantly, the interaction between the effect of the temporal bin and the control corpus is significant in both cases.",
"The 6717 Effect Estimate Std.",
"interaction supports the hypothesis that we see a pattern of change in the simulated corpora that is significantly different compared to the control data.",
"The interactions are shown in the plots in Figure 1.",
"Turning to the results for the Danish Geographical Names corpus, while the models show significant effects of Bin, Control and interaction between the two for the k g and the t d changes, no significant effects are found for the p b change.",
"When we look at the corresponding interaction plots in Figure 2, we see that the distance between p and b in the corpus decreases in the third bin to then increase and finally slightly decrease again in a non-linear way.",
"The changes displayed in the plots in",
"(b) and",
"(c), on the contrary, follow the expected trend: The observed consonant is moving towards its voiced version in the real corpus but not in the control.",
"The results from the two simulation studies suggest that sound change can be traced with our proposed methodology of measuring the distance between pairs of character distributions over time.",
"We showed this both in a simplified setting (Parupa), and in the orthographically noisy environment provided by synchronic Danish data (UD Danish).",
"The main assumption in these simulation studies was that change could be modeled linearly.",
"However, as discussed by scholars, change is often not linear, and can follow an s-shaped curve through a community (Denison, 2003).",
"In a study of semantic lexical change based on synthetic data, Shoe-mark et al. (2019) experiment with the injection of changes the probabilities of which vary linearly or logarithmically, and find that regression in general provides reasonable results in spite of being sensitive to outliers and of producing a certain amount of false positive results.",
"It also performs better than a non-parametric measure like Kendall's .",
"The results obtained in our study seem to confirm the usefulness of linear models to detect sound change even though one of the cases of lenition targeted in the Danish Geographical Names corpus could not be modelled.",
"Focusing on our results on the tracing of lenition, then, we were able to identify a change from /t k/ /d g/ .",
"However, an important thing to note in regards to the control setting for these results is how it diverges from the synthetic settings, which we initially used as a verification of the proposed hypothesis to trace sound change.",
"There, the 6718 102 103 104 105 Interaction of Bin and Corpus on Distance in Geo: p => b D i s t an c e 1300 1300 1350 1350 1400 1400 1450 1450 1500 1500 1550 1550 1600 1600 1650 1650 1700 1700 1750 1750 1800 Corpus ChangeControl",
"variation in vocabulary was the same in the simulated and the control settings.",
"In this case, however, vocabulary variation is lower in our control setting due to the shuffling of the name attestations.",
"As a consequence, the control setting does not properly test the possible confounding effect of vocabulary within the proposed methodology.",
"Therefore, we proceeded to evaluate what types of contexts the model picked up.",
"To get a sense of this, instead of looking at the euclidean distance for the full embedding, we ran linear regression on the target data looking at differences between character distributions for each dimension.",
"We then extracted the patterns corresponding to the dimensions showing significant differences and considered those with the highest Pearson's r coefficient (Tables 2-4).",
"Starting with the resulting patterns for Parupa and UD Danish, in both cases we are able to identify the exact contexts where the change was simulated: In Parupa before i / u and in the UD Danish corpus, between vowels and in the frequent suffix ig(t) (although the end-of-word is not captured due to n-gram size restrictions).",
"For Parupa, it is worth noting how the model captures patterns after vowel as well.",
"This position is only implicitly involved in the conditioning of the simulated change, and the 6719 4-gram Slope Pearson's r rvi_ -0.49 -0.85 _er -0.42 -0.78 sii_ -0.40 -0.71 m#a_ -0.40 -0.81 oli_ -0.39 -0.84 an_h -0.32 -0.80 ara_ -0.31 -0.62 n_ga -0.29 -0.82 vi_# -0.29 -0.70 is_a -0.29 -0.73 Table 4: Analysis of the change from k to g in historical records of geographical names.",
"slope correspondingly less steep.",
"Moving on to the tracing of sound change in real data, we focus our analysis on k g , which showed the greatest change.",
"Considering the patterns, rvi_ and vi_# , these are connected to the the word vig inlet', commonly used as a suffix in the formation of geographical names in Danish.",
"Descending from a Proto-Germanic word with final k ( wkwan to give way; to turn (away)', compare German weichen id.' and Dutch wijken id.' (Kroonen, 2013)), the suffix is in early sources attested with a k : For example, out of the six written sources of the geographical name Rrvig before the 14 th century (corresponding to bin 1-3 in our study), four were written with a k , while in later sources forms with g became predominant, with the latest attestation of k appearing in 1465.",
"Many of the patterns can be attributed to spellings related to similar changes 9,10 .",
"However, in the case of n_ga , is_a and an_h these are not immediately interpretable.",
"In the case of oli_ , this pattern is most likely related to the word bolig home;dwelling'.",
"This word, however, does not have a comparable ancestor with k , and the change has to be explained as reflecting later innovation, namely beginning trend of using bolig in name formations among younger attestations.",
"This latter example is related to an important issue in language evolution: When language changes 9 Danish sig bog; mire' from Old Danish sik , compare Norwegian and Swedish (dialectal) sik (Danmarks Stednavne, 19222013) 10 Danish ager field' from Proto-Germanic akra , compare English acre and Swedish ker (Kroonen, 2013).",
"through generations, we also observe shifts in culture.",
"Different types of data drift' are in fact discussed by Hamilton et al. (2016a) in the context of LSC.",
"The authors suggest that they may be modeled independently of each other by means of different measures of change.",
"The effect of cultural change has yet to be discussed for sound change.",
"However, it is an important discussion, since phonology, when looking at it from a corpus-based perspective, is not only governed by phonotactic constraints, but also a by-product of word usage, which is in turn dependent on cultural patterns.",
"In this respect, another important point to note about the retrieved patterns both from the simulation of UD Danish and the tracing of k g is that many of them reflect derivational or inflec-tional suffixes, and are thus characterized by high frequency of occurrence across word forms.",
"While the observation that frequent patterns are more easily captured may seem trivial, lack of suffi-cient evidence may nevertheless be the reason why we cannot model the p b change.",
"Germanic p descends from Proto-Indo-European (PIE) * b , which, however, has a special place in the PIE phoneme inventory and is considered a sort of black sheep that some scholars do not believe to have existed due to its few attestations.",
"In fact, the attestations of Germanic p most often come from loan words and are not seen in morphemes.",
"Thus the evidence for p b is inherently scarcer than for the other two consonant pairs we have investigated.",
"Further investigation of this sound change could be carried out by means of additional simulations, or more detailed analysis of the obtained character distribution, and is left for the future.",
"A final observation on the identified patterns is that the model fails to generalize across synchronic variation in spellings.",
"For example, we see that a spelling with ii is treated alongside spelling with a single i .",
"While this type of variation could to some extent be accounted for by treating it as an independent variable, such a solution would have consequences for our experiment design in that we use PPMI weighting on raw n-gram counts.",
"This method enabled us to interpret the exact inner workings of the model and find the contexts in which a change has happened.",
"If we had used neural models for example, in which characters are represented by dense embeddings, similar characters would have shared similar representations, thereby 6720 perhaps allowing the model to generalise e.g., to sound change occurring after a vowel .",
"In this study, we wanted to privilege explainability, but dense representations should be explored in the future.",
"In this paper we presented a novel method for the modeling of sound change through the use of diachronic character embeddings.",
"Sound change is modeled in terms of increasing similarity between character distributions across time intervals.",
"The proposed method was tested on synthetic data with promising results, and then applied to a real world scenario with the goal of tracing the lenition of / p t k / / b d g / in Danish by looking at spelling in historical sources.",
"The method was able to detect the changes for two of the sound pairs, and also to point at specific contexts of occurrence that influenced the changes.",
"However, our evaluation showed that the proposed models were sensitive to variation relating to vocabulary.",
"To what extent such variation is responsible for the occurrence of false positives has yet to be investigated.",
"For scholars interested in sound change, there are a number of important open questions, such as the relative chronology and geographical distribution of sound shifts.",
"Although we have not addressed these questions here, we believe our methodology can be further developed in ways that would allow to do so, e.g., by adding geographical location as an additional factor in the models.",
"Both issues would constitute interesting avenues for future research.",
"In this paper we have used purely count-based methods.",
"While this approach enables us to directly interpret the results of the models, it also suffers from its inability to generalise across contexts.",
"This drawback motivates experimenting with neural methods that make use of dense character representations, to test whether they can make similar generalisations as done by historical linguists, particularly as regards infrequent patterns that could be captured across word forms.",
"We would like to thank Bo Nissen Knudsen and David Caspersen Yousif for helping to prepare and making the data from Danmarks Stednavne available for the study.",
"Also, a great thanks to the anonymous reviewers for their helpful comments and suggestions.",
"and Text in Time and Space , a core group project funded by the Velux Foundations."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"other",
"abstain",
"result",
"objective",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"abstain",
"other",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"result",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"Hedges play an important role in the management of conversational interaction.",
"In peer-tutoring, they are notably used by tutors in dyads (pairs of interlocutors) experiencing low rapport to tone down the impact of instructions and negative feedback.",
"Pursuing the objective of building a tutoring agent that manages rapport with students in order to improve learning, we used a multimodal peer-tutoring dataset to construct a computational framework for identifying hedges.",
"We compared approaches relying on pre-trained resources with others that integrate insights from the social science literature.",
"Our best performance involved a hybrid approach that outperforms the existing baseline while being easier to interpret.",
"We employ a model explainability tool to explore the features that characterize hedges in peer-tutoring conversations, and we identify some novel features, and the benefits of such a hybrid model approach.",
"Rapport, most simply defined as the . . . relative harmony and smoothness of relations between people . . .",
"(Spencer-Oatey, 2005), has been shown to play a role in the success of activities as varied as psychotherapy",
"(Leach, 2005)",
"and survey interviewing",
"(Lune and Berg, 2017).",
"In peer-tutoring, rapport, as measured by the annotation of thin slices of video, has been shown to be beneficial for learning outcomes",
"(Zhao et al., 2014; Sinha and Cassell, 2015).",
"The level of rapport rises and falls with conversational strategies deployed by tutors and tutees at appropriate times, and as a function of the content of prior turns.",
"These strategies include self-disclosure, referring to shared experience, and, on the part of tutors, giving instructions in an indirect manner.",
"Some work has attempted to automatically detect these strategies in the service of intelligent tutors",
"(Zhao et al., 2016a), but only a few strategies have been attempted.",
"Other work has concentrated on a \"social reasoning module\"",
"(Romero et al., 2017)",
"to decide which strategies should be generated in a given context, but indirectness was not among the strategies targeted.",
"In this paper, we focus on the automatic classification of one specific strategy that is particularly important for the tutoring domain, and therefore important for intelligent tutors: hedging, a sub-part of indirectness that \"softens\" what we say.",
"This work is part of a larger research program with the long-term goal of automatically generating indirectness behaviors for a tutoring agent.",
"According to Brown and Levinson",
"(1987), hedges are part of the linguistic tools that interlocutors use to produce politeness, by limiting the face threat to the interlocutor",
"(basically by limiting the extent to which the interlocutor might experience embarrassment because of some kind of poor per-formance).",
"An example is \"that's kind of a wrong answer\".",
"Hedges are also found when speakers wish to avoid losing face themselves, for example when saying",
"(\" I think I might have to add 6.\").",
"Madaio et al.",
"(2017)",
"found that in a peer-tutoring task, when rapport between interlocutors is low, tutees attempted more problems and correctly solved more problems when their tutors hedged instruc-2160 tions, which likewise points towards a \"mitigation of face threat\" function.",
"Hedges can also be associated with a nonverbal component, for example averted eye gaze during criticism",
"(Burgoon and Koper, 1984).",
"Hedges are not, however, always appropriate, as in \"I kind of think it's raining today.\" when the interlocutors can both see rain",
"(although it might be taken as humorous).",
"These facts about hedges motivate a way to automatically detect them and, ultimately",
"(although not in the current work)",
"also generate them.",
"In both cases we first have to be able to characterize them using interpretable linguistic features, which is what we address in the current paper.",
"Thus, in the work described here, based on linguistic descriptions of hedges",
"(Brown and Levinson, 1987; Fraser, 2010), we built a rule-based classifier.",
"We show that this classifier in combination with additional multimodal interpretable context-dependent features significantly improves the performance of a machine learning model for hedges, compared to a less interpretable deep learning baseline from Goel et al.",
"(2019)",
"using word embeddings.",
"We also relied on a machine learning model explanation tool",
"(Lundberg and Lee, 2017)",
"to investigate the linguistic features related to hedges in the context of peer-tutoring, primarily to see if we could discover surprising features that the classification model would associate to hedges in this context, and we describe those below.",
"The code of the models described in the paper is also provided.",
"1 2 Related work Hedges: According to Fraser",
"(2010), hedging is a rhetorical strategy that attenuates the strength of a statement.",
"One way to produce a hedge is by altering the full semantic value of a particular expression through Propositional hedges",
"(also called Approximators in Prince et al.",
"(1982)), as in \"You are kind of wrong,\" that reduce prototypical-ity",
"(i.e accuracy of the correspondence between the proposition and the reality that the speaker seeks to describe).",
"Propositional hedges are related to fuzzy language",
"(Lakoff, 1975), and therefore to the production of vagueness",
"(Williamson, 2002)",
"and uncertainty",
"(Vincze, 2014).",
"A second kind are Relational Hedges",
"(also called Shields in Prince et al.",
"(1982)), such as I think that you are wrong. or The doctor wants you to stop smoking., conveying that the proposition is 1 https://github.com/AnonymousHedges/HedgeDetection considered by the speaker as subjective.",
"In a further sub-division, Attribution Shields , as in \"The doctor wants you ... \", the involvement of the speaker in the truth value of the proposition is not made explicit, which allows speakers not to take a stance.",
"As described above, Madaio et al.",
"(2017)",
"found that tutors who showed lower rapport with their tutees used more hedged instructions",
"(they also employed more positive feedback), however this was only the case for tutors with a greater belief in their ability to tutor.",
"Tutees in this context solved more problems correctly when their tutors hedged instructions.",
"No effect of hedging was found for dyads",
"(pairs of interlocutors)",
"with greater social closeness.",
"However, the authors did not look at the specific linguistic forms these teenagers used.",
"Rowland",
"(2007)",
"also describes the role that hedging plays in this age group, showing that students use both relational",
"(\" I think that John is smart.\")",
"and propositional",
"(\"John is kind of smart.\")",
"hedges for much the same shielding function of demonstrating uncertainty, to save them from the risk of embarrassment if they are wrong.",
"The author observed that teens used few Adaptors",
"( kind of , somewhat )",
"and preferred to use Rounders",
"( around , close to ).",
"However, this study was performed with an adult and two children, possibly biasing the results due to the participation of the adult investigator.",
"Hedges have been included in virtual tutoring agents before now.",
"(Howard et al., 2015)",
"integrated hedges in a tutor agent for undergraduates in CS, as a way to encourage the student to take the initiative.",
"Hedges have also been used as a way of integrating Brown and Levinson's politeness framework",
"(Wang et al., 2008; Schneider et al., 2015)",
"in virtual tutoring agents.",
"Results were not broken out by strategy, but politeness in general was shown to positively influence motivation and learning, in certain conditions.",
"Computational methods for hedge detection: A number of studies have targeted the detection of hedges and uncertainty in text",
"(Medlock and Briscoe, 2007; Ganter and Strube, 2009; Tang et al., 2010; Velldal, 2011; Szarvas et al., 2012), particularly following the CoNLL 2010 dataset release",
"(Farkas et al., 2010).",
"However, this work is not as related to hedges in conversation, as it focuses on a formal and academic language register",
"(Hy-land, 1998; Varttala, 1999).",
"As noted by Prokofieva and Hirschberg",
"(2014), the functions of hedges are domainand genre-dependent, therefore this bias 2161 towards formality implies that the existing work may not adapt well to the detection of hedges in conversation between teenagers.",
"A consequence is that the existing work does not consider terms like \"I think,\" since opinions rarely appear in an academic writing dataset.",
"Instructions are also almost absent",
"(\"I think you have to add ten to both sides.\"), a strong limitation for the study of conversational hedges since it is in requests",
"(including tutoring instructions)",
"that indirect formulations mostly occur according to Blum-Kulka",
"(1987).",
"Prokofieva and Hirschberg",
"(2014)",
"also note that it is difficult to detect hedges because the word patterns associated with them have other semantic and pragmatic functions: considering \"I think that you have to add x to both sides.\" vs \"I think that you are an idiot.\", it is not clear that the second use of \"I think that\" is an hedge marker.",
"They advocate using machine learning approaches to deal with the ambiguity of these markers.",
"Working on a conversational dataset, Ulinski et al.",
"(2018)",
"built a computational system to assess speaker commitment",
"(i.e. at which point the speaker seems convinced by the truth value of a statement), in particular by relying on a rule-based detection system for hedges.",
"Compared to that work, our rule-based classification model is directly detecting hedge classes, and we employ the predictions of the rule-based model as a feature for stronger machine learning models, designed to lessen the impact of the imbalance between classes.",
"We also consider apologies when they serve a mitigation function",
"(we then call them Apologizers ), as was done by the authors of our corpus, and we also use the term subjectivizers as defined below, to be able to compare directly with the previous work carried out on this corpus.",
"As far as we know, only Goel et al.",
"(2019)",
"have worked with a peer-tutoring dataset",
"(the same one that we also use), and they achieved their best classification result by employing an Attention-CNN model, inspired by Adel and Schtze",
"(2017).",
"We consider a set D of conversations D =",
"( c 1 , c 2 , ..., c | D | )",
", where each conversation is composed of a sequence of independent syntactic clauses c i =",
"( u 1 , u 2 , ..., u M )",
", where M is the number of clauses in the conversation.",
"Note that two consecutive clauses can be produced by the same speaker.",
"Each clause is associated with a unique label corresponding to the different hedge classes described in Table 1: y i C = { Propositional Hedges , Apologizers , Subjectivizers , Not hedged }.",
"Finally, an utterance u i can be represented as a vector of features X =",
"( x 1 , x 2 , ..., x N )",
", where N represents the number of features we used to describe a clause.",
"Our first goal is to design a model that correctly predicts the label y i associated to u i .",
"It can be understood as the following research question: RQ1: \"Which models and features can be used to automatically characterize hedges in a peer-tutoring interaction?\"",
"Our second goal is to identify, for each hedge class, the set of features F class = { f k } , k [1 , N ] sorted by feature importance in the classification of class .",
"It corresponds to the following research question: RQ2: \"What are the most important linguistic features that characterize our hedge classes in a peer-tutoring setting?\" 4 Methodology 4.1 Corpus Data collection: The dialogue corpus used here was collected as part of a larger study on the effects of rapport-building on reciprocal peer tutoring.",
"24 American teenagers",
"(mean age = 13.5, min = 12, max = 15), half male and half female, came to a lab where half of the participants were paired with a same-age, same-gender friend, and the other half paired with a stranger.",
"The participants were assigned to a total of 12 dyads in which the participants alternated tutoring one another in linear algebra equation solving for 5 weekly hour-long sessions, for a total corpus of nearly 60 hours of face-to-face interactions.",
"Each session was structured such that the students engaged in brief social chitchat in the beginning, then one of the students was randomly assigned to tutor the other for 20 minutes.",
"They then engaged in another social period, and concluded with a second tutoring period where the other student was assigned the role of tutor.",
"Audio and video data were recorded, transcribed, and segmented for clause-level dialogue annotation, providing nearly 24 000 clauses.",
"Non-speech segments",
"(notably fillers and laughter)",
"were maintained.",
"Because of temporal misalignment for parts of the corpus, many paraverbal phenomena, such as prosody, were unfortunately not available to us.",
"Since our access to the dataset is covered by a Non-Disclosure Agreement, it cannot be released 2162 publicly.",
"However the original experimenters' Institutional Review Board",
"(IRB)",
"approval allows us to view, annotate, and use the data to train models.",
"This also allows us to provide a link to a pixe-lated video example in the GitHub repository of the project 2 .",
"Data annotation: The dataset was previously annotated by Madaio et al.",
"(2017), following an annotation manual that used hedge classes derived from Rowland",
"(2007)",
"(see Table 1).",
"Only the task periods of the interactions were annotated.",
"Comparing the annotations with the classes mentioned in the related work section, Subjectivizers correspond to Relational hedges",
"(Fraser, 2010), Propositional hedges and Extenders correspond to Approximators",
"(Prince et al., 1982)",
"with the addition of some discourse markers such as just .",
"Apologizers are mentioned as linguistic tools related to negative politeness in Brown and Levinson",
"(1987).",
"Krippendorff's alpha obtained for this corpus annotated by four coders was over 0.7 for all classes",
"(denoting an acceptable inter-coder reliability according to Krippendorff",
"(2004)).",
"The dataset is widely im-balanced, with more than 90% of the utterances belonging to the Not hedged class.",
"In reviewing the corpus and the annotation manual, however, we noticed two issues.",
"First, the annotation of the Extenders class was inconsistent, leading to the Extenders and Propositional hedges classes carrying similar semantic functions.",
"We therefore merged the two classes and grouped utterances labeled as Extenders and those labeled as Propositional hedges under the heading of Propositional hedges .",
"Second, the annotation of clauses containing the tokens \"just\" and \"would\"",
"(two terms occurring frequently in the dataset that are key components of Propositional Hedges and Subjectivizers but that are not in fact hedges in all cases)",
"was also inconsistent, leading to virtually all clauses with those two tokens being considered hedges.",
"We therefore re-considered all the clauses associated with any of the hedge classes, as well as all the clauses in the \"Not hedged\" class that contained \"just\" or \"would\".",
"The re-annotation was carried out by two annotators who achieved a Krippendorff's alpha inter-rater reliability of .9 or better for Apologizers , Subjectivizers , and Propositional hedges before independently re-annotating the relevant clauses.",
"An example of a re-annotation was removing \"I would kill you!\" from the hedge 2 https://github.com/AnonymousHedges/HedgeDetection classes.",
"Label from rule-based classifier",
"(Label RB): We use the class label predicted by the rule-based classifier described in Section 4.3 as a feature.",
"Our hypothesis is that the machine learning model can use this information to counterbalance the class imbalance.",
"To take into account the fact that some rules are more efficient than others, we weighted the class label resulting from the rule-based model by the precision of the rule that generated it.",
"Unigram and bigram: We count the number of occurrences of unigrams and bigrams of the corpus in each clause.",
"We used the lemma of the words for unigrams and bigrams using the nltk lemmatizer",
"(Loper, 2002)",
"and selected unigrams and bigrams that occurred in the training dataset at least fifty times.",
"The goal was to investigate, with a bottom-up approach, to what extent the use of certain words characterizes hedge classes in tutoring.",
"In Section 5 we examine the overlap between these words and those a priori identified by the rules.",
"Part-of-speech",
"(POS): Hedge classes seem to be associated with different syntactic patterns: for example, subjectivizers most often contain a personal pronoun followed by a verb, as in \"I guess\", \"I believe\", \"I think\".",
"We therefore considered the number of occurrences of POS-Tag n-grams",
"(n=1, 2, 3)",
"as features.",
"We used the spaCy POS-tagger and considered POS unigrams, bigrams and trigrams that occur at least 10 times in the training dataset.",
"LIWC: Linguistic Inquiry and Word Count",
"(LIWC)",
"(Pennebaker et al., 2015)",
"is standard software for extracting the count of words belonging to specific psycho-social categories",
"( e.g. , emotions, religion).",
"It has been successfully used in the detection of conversational strategies",
"(Zhao et al., 2016a).",
"We therefore count the number of occurrences of all the 73 categories from LIWC.",
"Tutoring moves",
"(TM): Intelligent tutoring systems rely on specific tutoring moves to successfully convey content",
"(as do human tutors).",
"We therefore looked at the link between the tutoring moves, as annotated in Madaio et al.",
"(2017), and hedges.",
"For tutors, these moves are",
"(1)",
"instructional directives and suggestions,",
"(2)",
"feedback, and",
"(3)",
"affirmations, mostly explicit reflections on their partners'comprehension, while for tutees, they are",
"(1)",
"questions,",
"(2)",
"feedbacks, and",
"(3)",
"affirmations, 2163 Class Definition Example Subjectivizers Words that reduce intensity or certainty So then I would divide by two.",
"Prop.",
"hedges Apologizers Subjectivizers Not hedged Total 1210 128 626 21192 23156 Table 2: Distribution of the classes Features name Automatic extraction Vector size Rule-based label Yes 4 Unigram Yes ~250 Bigram Yes ~250 POS Yes ~1200 LIWC Yes 73 Nonverbal No 24 Tutoring moves No 6 Total ~1800 Table 3: List of automatically extracted and manually annotated features with their size.",
"Nonverbal and paraverbal behaviors: As in Goel et al.",
"(2019), we included the nonverbal and paraverbal behaviors that are related to hedges.",
"Specifically, we consider laughter and smiles, that have been shown to be effective methods of mitigation",
"(Warner-Garcia, 2014), cut-offs indicating self-repairs, fillers like \"Um\", gaze shifts",
"(annotated as 'Gaze at Partner', 'Gaze at the Math Worksheet', and 'Gaze elsewhere'), and head nods.",
"Each feature was present twice in the feature vector, one time for each interlocutor.",
"Inter-rater reliability for nonverbal behavior was 0.89",
"(as measured by Krippendorff's alpha)",
"for eye gaze, 0.75 for smile count, 0.64 for smile duration and 0.99 for head nod.",
"Laughter is also reported in the transcript at the word level.",
"We separate the tutor's behaviors from those of the tutee.",
"The collection process for these behaviors is detailed further in Zhao et al.",
"(2016b).",
"The clause-level feature vector was normalized by the length of the clause",
"(except for the rule-based label).",
"This length was also added as a feature.",
"Table 3 presents an overview of the final feature vector.",
"The classification models used are presented here according to their level of integration of external linguistic knowledge.",
"Rule-based model: On the basis of the annotation manual used to construct the dataset from Madaio et al.",
"(2017), and with descriptions of hedges from Rowland",
"(2007), Fraser",
"(2010)",
"and Brown and Levinson",
"(1987), we constructed a rule-based classifier that matches regular expressions indicative of hedges.",
"The rules are detailed in Table 7 in the Appendix.",
"LGBM: Since hedges are often characterized by explicit lexical markers, we tested the assumption that a machine learning model with a knowledge-driven representation for clauses could compete with a BERT model in performance, while being much more interpretable.",
"We relied on LightGBM, an ensemble of decision trees trained with gradient boosting",
"(Ke et al., 2017).",
"This model was selected because of its performance with small training datasets and because it can ignore uninformative features, but also for its training speed compared to alternative implementations of gradient boosting methods.",
"Multi-layer perceptron",
"(MLP): As a simple baseline, we built a multi-layer perceptron using three sets of features: a pre-trained contextual representation of the clause",
"(SentBERT; Reimers and Gurevych",
"(2019))",
"; the concatenation of this contextual representation of the clause and a rule-based label",
"(not relying on the previous clauses)",
"; and finally the concatenation of all the features mentioned in section 4.2, without the contextualized representation.",
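A sketch of the embedding-based MLP variant; the SentenceTransformer checkpoint and layer sizes are assumptions, since the paper cites Reimers and Gurevych (2019) without naming a checkpoint:

```python
# Sketch: MLP over pre-trained sentence embeddings of each clause.
from sentence_transformers import SentenceTransformer
from sklearn.neural_network import MLPClassifier

encoder = SentenceTransformer("all-MiniLM-L6-v2")
X_train_emb = encoder.encode(train_clauses)   # one vector per clause

mlp = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=300)
mlp.fit(X_train_emb, y_train)
```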
"LSTM over a sequence of clauses: Since we are working with conversational data, we also wanted to test whether taking into account the previous clauses helps to detect the type of hedge class in the next clause.",
"Formally, we want to infer y i using y i = max y Classes P",
"( y | X",
"( u i )",
", X",
"( u i 1 )",
", ..., X",
"( u i K ))",
", where K is the number of previous clauses that the model will take into account.",
"MLP model presented above infers y i using y i = max y Classes P",
"( y | X",
"( u i ))",
", therefore a difference of performance between the two models would be a sign that using information from the previous clauses could help to detect the hedged formulation in the current clause.",
"We tested a LSTM model with the same representations for clauses as for the MLP model.",
"CNN with attention: Goel et al.",
"(2019)",
"established their best performance on hedge detection using a CNN model with additive attention over word",
"(and not clause)",
"embeddings.",
"Contrary to the MLP and LSTM models mentioned above, this model tries to infer y i using y i = max y Classes P",
"( y | g",
"( w 0 )",
", g",
"( w 1 )",
", ..., g",
"( w L ))",
", with L representing the maximum clause length we allow, and g representing a function that turns the word w j , j [0 , L ] into a vector representation",
"(for more details, please see Adel and Schtze",
"(2017)).",
"BERT: To benefit from deep semantic and contextual representations of the utterances, we also fine-tuned BERT",
"(Devlin et al., 2019)",
"on our classification task.",
"BERT is a pre-trained Transformers encoder",
"(Vaswani et al., 2017)",
"that has significantly improved the state of the art on a number of NLP tasks, including sentiment analysis.",
"It produces a contextual representation of each word in a sentence, making it capable of disambiguating the meaning of words like \"think\" or \"just\" that are representative of certain classes of hedges.",
"BERT, however, is notably hard to interpret.",
"Looking at which features improve the performance of our classification models tells us whether these features are informative or not, but does not explain how these features are used by the models to make a given prediction.",
"We therefore produced a complementary analysis using an interpretability tool.",
"As demonstrated by",
"(Lundberg and Lee, 2017), LightGBM internal feature importance scores are inconsistent with both the model behavior and human intuition, so we instead used a model-agnostic tool.",
"SHAP",
"(Lundberg and Lee, 2017)",
"assigns to each feature an importance value",
"(called Shapley values)",
"for a particular prediction depending on the extent of its contribution",
"(a detailed introduction to Shapley values and SHAP can be found in Molnar",
"(2020)).",
"SHAP is a model-agnostic framework, therefore the values associated with a set of features can be compared across models.",
"It should be noted that SHAP produces explanations on a case-by-case basis, therefore it can both provide local and global explanations.",
"For the Gradient Boosting model, we use an adapted version of SHAP",
"(Lundberg et al., 2018), called TreeSHAP.",
"5.1 Experimental setting To detect the best set of features, we used LightGBM and proceeded incrementally, by adding the group of features we thought to be most likely associated with hedges.",
"We did not consider the risk of relying on a sub-optimal set of features through this procedure because of the strong ability of LightGBM to ignore uninformative features.",
"We use this incremental approach as a way to test our intuition about the performativity of groups of features",
"( i.e. does adding a feature improve the performance of the model)",
"with regard to the task of classification.",
"To compare our models, we trained them on the 4-class task, and looked at the average of the weighted F1-scores for the three hedge classes",
"( i.e. how well the models infer minority classes)",
"that we report here as \"3-classes\", and at the average of the weighted F1-scores for the 4 classes, that we report as \"4-classes\".",
"Details of the hyperparameters and experimental settings are provided in Appendix A. 5.2 Model comparison and feature analysis Overall results: Table 4 presents the results obtained by the 6 models presented in Section 4.3 for the multi-class problem.",
"Best performance",
"(F1-score of 79.0)",
"is obtained with LightGBM leveraging almost all the features.",
"In the appendix",
"(see Table 8 and Table 9)",
"we indicate the confidence intervals to represent the significance of the differences between the models.",
"First, and perhaps surprisingly, we notice that the use of \"Knowledge-Driven\" features based on rules built from linguistic knowledge of hedges in the LightGBM model outperforms the use of pre-trained embeddings within a fine-tuned BERT model",
"(79.0 vs. 70.6), and in the neural baseline from",
"(Goel et al., 2019)",
"(79.0 vs 64.5).",
"The low scores obtained by the LGBM, LSTM and MLP models with pre-trained sentence embeddings versus Knowledge-Driven features might signal that the word patterns characterizing hedges are not salient in these representations",
"(i.e. the 2165 Models KDFeat.",
"distance between \" I think you should add 5.\" and \"You should add 5.\" is",
"short.).",
"KD Features seem to provide a better separability of the classes.",
"The combination of KD features and Pre-trained embeddings does not significantly improve the performance of the models compared to the KD Features only, which suggests that the information from the Pre-trained embeddings is redundant with the one from the KD Features.",
"This result may be due to the high dimensionality of the input vector",
"(868 with PCA on the KD Features; 2500 otherwise).",
"A second finding is that the use of gradient boosting models on top of rule-based classifiers better models the hedge classes.",
"The other machine learning models did not prove to be as effective, except for BERT.",
"Feature analysis using LightGBM: Using the best performing model, Table 5 shows the role of each feature set in the prediction task.",
"The significance of the differences is shown in Table 10 and Table 11.",
"Compared to the rule-based model, the introduction of n-grams significantly improved the performance of our classifier, suggesting that some lexical and syntactic information describing the hedge classes was not present in the rule-based model.",
"Looking at Table 5, we do not observe significant differences between the LGBM model using only the label rule based +",
"(1-grams and 2-grams)",
"and the models incorporating more features.",
"To our surprise, neither the tutoring moves nor the nonverbal features significantly improved the performance of the model.",
"The 2 features were included to index the specific peer tutoring context of these hedges, so this indicates that in future work we might wish to apply the current model to another context of use to see if this model of hedges is more generally applicable than we originally thought.",
"By combining this result with the increased performance of the model using Knowledge-Driven",
"( i.e. explicit)",
"features compared to pre-trained embeddings, it would seem that hedges are above all a lexical phenomenon",
"( i.e. produced by specific lexical elements).",
"We trained the SHAP explanation models on LightGBM with all features.",
"The most informative features",
"(in absolute value)",
"for each class are shown in Table 6, and the plots by class are presented in the Appendix.",
"The most important features seem to be the rule-based labels, which appear in at least the fourth position for three classes",
"(see Table 6), and in the first position for Propositional Hedges and Not hedged classes.",
"Surprisingly, the Rule-Based label does not appear in the top 20 features for Apologizers .",
"However, given that the class rarely appears in the data, the rules seldom activate, so the feature may simply be informative for a very small number of clauses.",
"Unigrams",
"( Oh , Sorry , just , Would , and I )",
"are also present in the 5 top-ranked features.",
"This confirms the findings mentioned in related work for the characterization of the different hedge classes",
"( just with Propositional Hedges , sorry with Apologizer , I with Subjectivizers ).",
"The presence of Oh also has high importance for the characterization of Apologizer",
"(n=2), as illustrated in examples such as \" Oh sorry, that's nine.\".",
"We note that the occurrences of \" Oh sorry \" as a stand-alone clause were excluded by our rule-based model because they do not correspond to an apologizer",
"(they cannot mitigate the content of a proposition if there is no proposition associated).",
"This example illustrates the interest of a machine learning model approach to disambiguate the function of conventional non-propositional phrases like \" Oh sorry \".",
"novel features whose function was not identified in the hedges literature:",
"(i)",
"what LIWC classifies as informal words but that are mostly interjections like ah and oh are strongly associated with Apologizer , as are disfluencies",
"(n=12);",
"(ii)",
"the use of POS tags seems to be very relevant for characterizing the different classes",
"(2-gram of POS tag features 3 occur in the top-ranked features of all the 3 Note that there is strong redundancy between some features of LIWC and the spaCy POS tagger that both produce a \"Pronoun\" category, using a lexicon in the first case, and a neural inference in the second.",
"classes (see Figures in the Appendix).",
"It means that there are some recurring syntactic patterns in each class;",
"(iii) Regarding the utterance size , a clause shorter than the mean is weakly associated with directness (n=17) while a longer clause suggests that it contains a Subjectivizer (n=6) .",
"Apologizers are characterized by a mean clause length (n=5), with few variations from it;",
"(iv) Tutoring moves are not strong predictors of any classes: \"Affirma-tion from tutor\" is the only feature appearing as a predictor of Propositional hedges (n=20).",
"This is consistent with the feature analysis in Table 5, suggesting that tutoring moves do not significantly improve the performance of the classifier;",
"(v) Nonverbal behaviors do not appear as important features for the classification.",
"This is coherent with results from (Goel et al., 2019).",
"Note that prosody might play a role in detecting instructions that trail off, but, as described, paraverbal features were not available;",
"(vi) Would plays an important role in the production of hedges, as it is strongly associated to Propositional hedges (n=2).",
"It is interesting to note that, when designing the rule-based classifier, we saw it decrease in performance when we started to include would in our regular expression patterns, probably because the form is hard to disambiguate for a deterministic system.",
"While exploring the Shapley values associated to each clause, we observed that features like tutoring moves are extremely informative for a very small number of clauses (therefore not significantly influencing the overall performance of the prediction), and more or less not informative for the rest.",
"Inferring the global importance of a feature as a mean across the shapley values in the dataset may not be the only way to explore the behavior of gradient boosting methods.",
"It might be more useful to cluster clauses based on the importance that SHAP gives to that feature in its classification, as this could help discover sub-classes of hedges that are differentiated from the rest by their interaction with a specific feature (in the way that some Apologizers are characterized by an \"oh\").",
"We also note that the explanation model is sensitive to spurious correlations in the dataset, caused by the small representation of some class: for example, \"nine\" (n=7) and \"four\" (n=20) are positive predictors of Apologizers .",
"Through our classification performance experiments, we showed that it is possible to use machine learning methods to diminish the ambiguity of hedges, and that the hybrid approach of using rule-based label features derived from social science (including linguistics) literature within a machine learning model helped significantly to in-crease the model's performance.",
"Nonverbal behaviors and tutoring moves did not provide information at the sentence level; both the performance of the model and the feature contribution analysis suggested that their impact on the model output was not strong.",
"This is consistent with results from Goel et al. (2019).",
"However, in future work we would like to investigate the potential of multimodal patterns when we are able to better model sequentiality ( e.g. , negative feedback followed by a smile).",
"Regarding the SHAP analysis, most of the features that are considered as important are coherent with the definition of the classes ( I for subjectivizers, sorry for apologizers, just for propositional hedges).",
"However, we discovered that features like utterance 2167 size can also serve as indicators of certain classes of hedges.",
"A limitation of SHAP is that it makes a feature independence assumption, which prompts the explanatory model to underestimate the importance of redundant features (like pronouns in our work).",
"In the future we will explore explanatory models capable of taking into account the correlation between features in the dataset like SAGE (Covert et al., 2020), but suited for very imbal-anced datasets.",
"In the domain of peer-tutoring, we would like to be able to further test the link between hedges and rapport, and the link between hedges and learning gains in the subject being tutored.",
"As noted above, this kind of study requires a fine-grained control of the language produced by one of the interlocutors, which is difficult to achieve in a human-human experience.",
"We note that the hedge classifier can be used not just to classify, but also to work towards improving the generation of hedges for tutor agents.",
"In future work we will explore using the classifier to re-rank generation outputs, taking advantage of the recurring syntactic patterns (see",
"(ii) in Section 5.3) to improve the generation process of hedges, and regenerating clauses that don't contain one of these syntactic patterns.",
"Many thanks to members of the ArticuLabo at INRIA Paris for their precious assistance.",
"This work was supported in part by the the French government under management of Agence Nationale de la Recherche as part of the Investissements d'avenir program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute)."
] | [
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"other",
"other"
] |
[
"We introduce Recursive Routing Networks ( RRN s), which are modular, adaptable models that learn effectively in diverse environments.",
"RRN s consist of a set of functions, typically organized into a grid, and a meta-learner decision-making component called the router .",
"The model jointly optimizes the parameters of the functions and the meta-learner's policy for routing inputs through those functions.",
"RRN s can be incorporated into existing architectures in a number of ways; we explore adding them to word representation layers, recurrent network hidden layers, and classifier layers.",
"Our evaluation task is natural language inference ( NLI ).",
"Using the MULTINLI corpus, we show that an RRN 's routing decisions reflect the high-level genre structure of that corpus.",
"To show that RRN s can learn to specialize to more fine-grained semantic distinctions, we introduce a new corpus of NLI examples involving implicative predicates, and show that the model components become fine-tuned to the inferential signatures that are characteristic of these predicates.",
"Human cognition has an extraordinary ability to modularize, decomposing problems and solving them by re-composing elements from prior solutions, and this ability is nowhere more evident than in language understanding (Partee, 1984; Janssen, 1997).",
"Most machine learning architectures lack this modularity, which limits their ability to generalize and leaves them susceptible to catastrophic interference (McCloskey and Cohen, 1989) forgetting past skills when acquiring new ones.",
"We propose to address this need for modularity by applying Routing Networks (Rosenbaum et al., 2017) to natural language understanding.",
"Routing Networks are self-organizing networks with two components (Figure 1): a set of function blocks Router 5 Recursive Routing Network 3 2 4 1 6 Figure 1 : Given a premisehypothesis pair x , e.g., t managed to do s and t did s (1), the model needs to learn to predict entails (6).",
"The router (2) estimates the value of applying each of the available sub-functions to the input.",
"Given that manage has certain semantic properties, the router may select (3) a function f 1 specialized to them.",
"The module is then applied (4), yielding f 1 ( x ) .",
"This process repeats (5), now using f 1 ( x ) , selecting and applying another sub-function, and so on, until the router is confident in its prediction (6).",
"which can be applied to transform the input, and a router which makes decisions about which function block to apply next.",
"Here we introduce Recursive Routing Networks ( RRN s), in which there is a single set of composable functions, recursively chosen by the router.",
"RRN s can be applied to different components of modern language understanding architectures with full end-to-end training.",
"The model jointly optimizes the parameters of selected sub-functions and the meta-learner's policy for how to route inputs through those functions.",
"As a result, individual sub-functions specialize to specific inputs, and paths through the grid of sub-functions can similarly be trained to reflect specific concepts and capabilities.",
"RRN s share many intuitions with other modular methods like: Neural Module Networks (Andreas et al., 2015, 2016; Hu et al., 2017), which learn to construct a neural network from pre-defined modules; the Compositional Recursive Learner (Chang et al., 2018), a closely related approach that uti-x Routing across examples Weightsharing Possibledistribution OrthogonalizedKnowledge CompressedKnowledge High Interference HighTransfer i <latexit sha1_base64=\"7oNlFgnMzVWHsxisJ1PN/N2kgNU=\">AAAB73icbZBNS8NAEIYn9avWr6pHL8EieCqJCHosevFYwX5AG8pmO22XbjZxdyKU0D/hxYMiXv073vw3btsctPWFhYd3ZtiZN0ykMOR5305hbX1jc6u4XdrZ3ds/KB8eNU2cao4NHstYt0NmUAqFDRIksZ1oZFEosRWOb2f11hNqI2L1QJMEg4gNlRgIzsha7S6NkFhP9MoVr+rN5a6Cn0MFctV75a9uP+ZphIq4ZMZ0fC+hIGOaBJc4LXVTgwnjYzbEjkXFIjRBNt936p5Zp+8OYm2fInfu/p7IWGTMJAptZ8RoZJZrM/O/WielwXWQCZWkhIovPhqk0qXYnR3v9oVGTnJigXEt7K4uHzHNONmISjYEf/nkVWheVH3L95eV2k0eRxFO4BTOwYcrqMEd1KEBHCQ8wyu8OY/Oi/PufCxaC04+cwx/5Hz+ACJYkAQ=</latexit> <latexit sha1_base64=\"7oNlFgnMzVWHsxisJ1PN/N2kgNU=\">AAAB73icbZBNS8NAEIYn9avWr6pHL8EieCqJCHosevFYwX5AG8pmO22XbjZxdyKU0D/hxYMiXv073vw3btsctPWFhYd3ZtiZN0ykMOR5305hbX1jc6u4XdrZ3ds/KB8eNU2cao4NHstYt0NmUAqFDRIksZ1oZFEosRWOb2f11hNqI2L1QJMEg4gNlRgIzsha7S6NkFhP9MoVr+rN5a6Cn0MFctV75a9uP+ZphIq4ZMZ0fC+hIGOaBJc4LXVTgwnjYzbEjkXFIjRBNt936p5Zp+8OYm2fInfu/p7IWGTMJAptZ8RoZJZrM/O/WielwXWQCZWkhIovPhqk0qXYnR3v9oVGTnJigXEt7K4uHzHNONmISjYEf/nkVWheVH3L95eV2k0eRxFO4BTOwYcrqMEd1KEBHCQ8wyu8OY/Oi/PufCxaC04+cwx/5Hz+ACJYkAQ=</latexit> <latexit sha1_base64=\"7oNlFgnMzVWHsxisJ1PN/N2kgNU=\">AAAB73icbZBNS8NAEIYn9avWr6pHL8EieCqJCHosevFYwX5AG8pmO22XbjZxdyKU0D/hxYMiXv073vw3btsctPWFhYd3ZtiZN0ykMOR5305hbX1jc6u4XdrZ3ds/KB8eNU2cao4NHstYt0NmUAqFDRIksZ1oZFEosRWOb2f11hNqI2L1QJMEg4gNlRgIzsha7S6NkFhP9MoVr+rN5a6Cn0MFctV75a9uP+ZphIq4ZMZ0fC+hIGOaBJc4LXVTgwnjYzbEjkXFIjRBNt936p5Zp+8OYm2fInfu/p7IWGTMJAptZ8RoZJZrM/O/WielwXWQCZWkhIovPhqk0qXYnR3v9oVGTnJigXEt7K4uHzHNONmISjYEf/nkVWheVH3L95eV2k0eRxFO4BTOwYcrqMEd1KEBHCQ8wyu8OY/Oi/PufCxaC04+cwx/5Hz+ACJYkAQ=</latexit> <latexit sha1_base64=\"7oNlFgnMzVWHsxisJ1PN/N2kgNU=\">AAAB73icbZBNS8NAEIYn9avWr6pHL8EieCqJCHosevFYwX5AG8pmO22XbjZxdyKU0D/hxYMiXv073vw3btsctPWFhYd3ZtiZN0ykMOR5305hbX1jc6u4XdrZ3ds/KB8eNU2cao4NHstYt0NmUAqFDRIksZ1oZFEosRWOb2f11hNqI2L1QJMEg4gNlRgIzsha7S6NkFhP9MoVr+rN5a6Cn0MFctV75a9uP+ZphIq4ZMZ0fC+hIGOaBJc4LXVTgwnjYzbEjkXFIjRBNt936p5Zp+8OYm2fInfu/p7IWGTMJAptZ8RoZJZrM/O/WielwXWQCZWkhIovPhqk0qXYnR3v9oVGTnJigXEt7K4uHzHNONmISjYEf/nkVWheVH3L95eV2k0eRxFO4BTOwYcrqMEd1KEBHCQ8wyu8OY/Oi/PufCxaC04+cwx/5Hz+ACJYkAQ=</latexit> j <latexit sha1_base64=\"Kikg297GOi4umol6sW7ADHqcVTs=\">AAAB73icbZBNS8NAEIYnftb6VfXoJVgETyURQY9FLx4r2A9oQ9lsJ+3azSbuToRS+ie8eFDEq3/Hm//GbZuDtr6w8PDODDvzhqkUhjzv21lZXVvf2CxsFbd3dvf2SweHDZNkmmOdJzLRrZAZlEJhnQRJbKUaWRxKbIbDm2m9+YTaiETd0yjFIGZ9JSLBGVmr1aEBEus+dEtlr+LN5C6Dn0MZctW6pa9OL+FZjIq4ZMa0fS+lYMw0CS5xUuxkBlPGh6yPbYuKxWiC8WzfiXtqnZ4bJdo+Re7M/T0xZrExozi0nTGjgVmsTc3/au2MoqtgLFSaESo+/yjKpEuJOz3e7QmNnOTIAuNa2F1dPmCacbIRFW0I/uLJy9A4r/iW7y7K1es8jgIcwwmcgQ+XUIVbqEEdOEh4hld4cx6dF+fd+Zi3rjj5zBH8kfP5AyPckAU=</latexit> <latexit sha1_base64=\"Kikg297GOi4umol6sW7ADHqcVTs=\">AAAB73icbZBNS8NAEIYnftb6VfXoJVgETyURQY9FLx4r2A9oQ9lsJ+3azSbuToRS+ie8eFDEq3/Hm//GbZuDtr6w8PDODDvzhqkUhjzv21lZXVvf2CxsFbd3dvf2SweHDZNkmmOdJzLRrZAZlEJhnQRJbKUaWRxKbIbDm2m9+YTaiETd0yjFIGZ9JSLBGVmr1aEBEus+dEtlr+LN5C6Dn0MZctW6pa9OL+FZjIq4ZMa0fS+lYMw0CS5xUuxkBlPGh6yPbYuKxWiC8WzfiXtqnZ4bJdo+Re7M/T0xZrExozi0nTGjgVmsTc3/au2MoqtgLFSaESo+/yjKpEuJOz3e7QmNnOTIAuNa2F1dPmCacbIRFW0I/uLJy9A4r/iW7y7K1es8jgIcwwmcgQ+XUIVbqEEdOEh4hld4cx6dF+fd+Zi3rjj5zBH8kfP5AyPckAU=</latexit> <latexit 
sha1_base64=\"Kikg297GOi4umol6sW7ADHqcVTs=\">AAAB73icbZBNS8NAEIYnftb6VfXoJVgETyURQY9FLx4r2A9oQ9lsJ+3azSbuToRS+ie8eFDEq3/Hm//GbZuDtr6w8PDODDvzhqkUhjzv21lZXVvf2CxsFbd3dvf2SweHDZNkmmOdJzLRrZAZlEJhnQRJbKUaWRxKbIbDm2m9+YTaiETd0yjFIGZ9JSLBGVmr1aEBEus+dEtlr+LN5C6Dn0MZctW6pa9OL+FZjIq4ZMa0fS+lYMw0CS5xUuxkBlPGh6yPbYuKxWiC8WzfiXtqnZ4bJdo+Re7M/T0xZrExozi0nTGjgVmsTc3/au2MoqtgLFSaESo+/yjKpEuJOz3e7QmNnOTIAuNa2F1dPmCacbIRFW0I/uLJy9A4r/iW7y7K1es8jgIcwwmcgQ+XUIVbqEEdOEh4hld4cx6dF+fd+Zi3rjj5zBH8kfP5AyPckAU=</latexit> <latexit sha1_base64=\"Kikg297GOi4umol6sW7ADHqcVTs=\">AAAB73icbZBNS8NAEIYnftb6VfXoJVgETyURQY9FLx4r2A9oQ9lsJ+3azSbuToRS+ie8eFDEq3/Hm//GbZuDtr6w8PDODDvzhqkUhjzv21lZXVvf2CxsFbd3dvf2SweHDZNkmmOdJzLRrZAZlEJhnQRJbKUaWRxKbIbDm2m9+YTaiETd0yjFIGZ9JSLBGVmr1aEBEus+dEtlr+LN5C6Dn0MZctW6pa9OL+FZjIq4ZMa0fS+lYMw0CS5xUuxkBlPGh6yPbYuKxWiC8WzfiXtqnZ4bJdo+Re7M/T0xZrExozi0nTGjgVmsTc3/au2MoqtgLFSaESo+/yjKpEuJOz3e7QmNnOTIAuNa2F1dPmCacbIRFW0I/uLJy9A4r/iW7y7K1es8jgIcwwmcgQ+XUIVbqEEdOEh4hld4cx6dF+fd+Zi3rjj5zBH8kfP5AyPckAU=</latexit> x j x i x i x j i <latexit sha1_base64=\"7oNlFgnMzVWHsxisJ1PN/N2kgNU=\">AAAB73icbZBNS8NAEIYn9avWr6pHL8EieCqJCHosevFYwX5AG8pmO22XbjZxdyKU0D/hxYMiXv073vw3btsctPWFhYd3ZtiZN0ykMOR5305hbX1jc6u4XdrZ3ds/KB8eNU2cao4NHstYt0NmUAqFDRIksZ1oZFEosRWOb2f11hNqI2L1QJMEg4gNlRgIzsha7S6NkFhP9MoVr+rN5a6Cn0MFctV75a9uP+ZphIq4ZMZ0fC+hIGOaBJc4LXVTgwnjYzbEjkXFIjRBNt936p5Zp+8OYm2fInfu/p7IWGTMJAptZ8RoZJZrM/O/WielwXWQCZWkhIovPhqk0qXYnR3v9oVGTnJigXEt7K4uHzHNONmISjYEf/nkVWheVH3L95eV2k0eRxFO4BTOwYcrqMEd1KEBHCQ8wyu8OY/Oi/PufCxaC04+cwx/5Hz+ACJYkAQ=</latexit> <latexit sha1_base64=\"7oNlFgnMzVWHsxisJ1PN/N2kgNU=\">AAAB73icbZBNS8NAEIYn9avWr6pHL8EieCqJCHosevFYwX5AG8pmO22XbjZxdyKU0D/hxYMiXv073vw3btsctPWFhYd3ZtiZN0ykMOR5305hbX1jc6u4XdrZ3ds/KB8eNU2cao4NHstYt0NmUAqFDRIksZ1oZFEosRWOb2f11hNqI2L1QJMEg4gNlRgIzsha7S6NkFhP9MoVr+rN5a6Cn0MFctV75a9uP+ZphIq4ZMZ0fC+hIGOaBJc4LXVTgwnjYzbEjkXFIjRBNt936p5Zp+8OYm2fInfu/p7IWGTMJAptZ8RoZJZrM/O/WielwXWQCZWkhIovPhqk0qXYnR3v9oVGTnJigXEt7K4uHzHNONmISjYEf/nkVWheVH3L95eV2k0eRxFO4BTOwYcrqMEd1KEBHCQ8wyu8OY/Oi/PufCxaC04+cwx/5Hz+ACJYkAQ=</latexit> <latexit sha1_base64=\"7oNlFgnMzVWHsxisJ1PN/N2kgNU=\">AAAB73icbZBNS8NAEIYn9avWr6pHL8EieCqJCHosevFYwX5AG8pmO22XbjZxdyKU0D/hxYMiXv073vw3btsctPWFhYd3ZtiZN0ykMOR5305hbX1jc6u4XdrZ3ds/KB8eNU2cao4NHstYt0NmUAqFDRIksZ1oZFEosRWOb2f11hNqI2L1QJMEg4gNlRgIzsha7S6NkFhP9MoVr+rN5a6Cn0MFctV75a9uP+ZphIq4ZMZ0fC+hIGOaBJc4LXVTgwnjYzbEjkXFIjRBNt936p5Zp+8OYm2fInfu/p7IWGTMJAptZ8RoZJZrM/O/WielwXWQCZWkhIovPhqk0qXYnR3v9oVGTnJigXEt7K4uHzHNONmISjYEf/nkVWheVH3L95eV2k0eRxFO4BTOwYcrqMEd1KEBHCQ8wyu8OY/Oi/PufCxaC04+cwx/5Hz+ACJYkAQ=</latexit> <latexit sha1_base64=\"7oNlFgnMzVWHsxisJ1PN/N2kgNU=\">AAAB73icbZBNS8NAEIYn9avWr6pHL8EieCqJCHosevFYwX5AG8pmO22XbjZxdyKU0D/hxYMiXv073vw3btsctPWFhYd3ZtiZN0ykMOR5305hbX1jc6u4XdrZ3ds/KB8eNU2cao4NHstYt0NmUAqFDRIksZ1oZFEosRWOb2f11hNqI2L1QJMEg4gNlRgIzsha7S6NkFhP9MoVr+rN5a6Cn0MFctV75a9uP+ZphIq4ZMZ0fC+hIGOaBJc4LXVTgwnjYzbEjkXFIjRBNt936p5Zp+8OYm2fInfu/p7IWGTMJAptZ8RoZJZrM/O/WielwXWQCZWkhIovPhqk0qXYnR3v9oVGTnJigXEt7K4uHzHNONmISjYEf/nkVWheVH3L95eV2k0eRxFO4BTOwYcrqMEd1KEBHCQ8wyu8OY/Oi/PufCxaC04+cwx/5Hz+ACJYkAQ=</latexit> j <latexit 
sha1_base64=\"Kikg297GOi4umol6sW7ADHqcVTs=\">AAAB73icbZBNS8NAEIYnftb6VfXoJVgETyURQY9FLx4r2A9oQ9lsJ+3azSbuToRS+ie8eFDEq3/Hm//GbZuDtr6w8PDODDvzhqkUhjzv21lZXVvf2CxsFbd3dvf2SweHDZNkmmOdJzLRrZAZlEJhnQRJbKUaWRxKbIbDm2m9+YTaiETd0yjFIGZ9JSLBGVmr1aEBEus+dEtlr+LN5C6Dn0MZctW6pa9OL+FZjIq4ZMa0fS+lYMw0CS5xUuxkBlPGh6yPbYuKxWiC8WzfiXtqnZ4bJdo+Re7M/T0xZrExozi0nTGjgVmsTc3/au2MoqtgLFSaESo+/yjKpEuJOz3e7QmNnOTIAuNa2F1dPmCacbIRFW0I/uLJy9A4r/iW7y7K1es8jgIcwwmcgQ+XUIVbqEEdOEh4hld4cx6dF+fd+Zi3rjj5zBH8kfP5AyPckAU=</latexit> <latexit sha1_base64=\"Kikg297GOi4umol6sW7ADHqcVTs=\">AAAB73icbZBNS8NAEIYnftb6VfXoJVgETyURQY9FLx4r2A9oQ9lsJ+3azSbuToRS+ie8eFDEq3/Hm//GbZuDtr6w8PDODDvzhqkUhjzv21lZXVvf2CxsFbd3dvf2SweHDZNkmmOdJzLRrZAZlEJhnQRJbKUaWRxKbIbDm2m9+YTaiETd0yjFIGZ9JSLBGVmr1aEBEus+dEtlr+LN5C6Dn0MZctW6pa9OL+FZjIq4ZMa0fS+lYMw0CS5xUuxkBlPGh6yPbYuKxWiC8WzfiXtqnZ4bJdo+Re7M/T0xZrExozi0nTGjgVmsTc3/au2MoqtgLFSaESo+/yjKpEuJOz3e7QmNnOTIAuNa2F1dPmCacbIRFW0I/uLJy9A4r/iW7y7K1es8jgIcwwmcgQ+XUIVbqEEdOEh4hld4cx6dF+fd+Zi3rjj5zBH8kfP5AyPckAU=</latexit> <latexit sha1_base64=\"Kikg297GOi4umol6sW7ADHqcVTs=\">AAAB73icbZBNS8NAEIYnftb6VfXoJVgETyURQY9FLx4r2A9oQ9lsJ+3azSbuToRS+ie8eFDEq3/Hm//GbZuDtr6w8PDODDvzhqkUhjzv21lZXVvf2CxsFbd3dvf2SweHDZNkmmOdJzLRrZAZlEJhnQRJbKUaWRxKbIbDm2m9+YTaiETd0yjFIGZ9JSLBGVmr1aEBEus+dEtlr+LN5C6Dn0MZctW6pa9OL+FZjIq4ZMa0fS+lYMw0CS5xUuxkBlPGh6yPbYuKxWiC8WzfiXtqnZ4bJdo+Re7M/T0xZrExozi0nTGjgVmsTc3/au2MoqtgLFSaESo+/yjKpEuJOz3e7QmNnOTIAuNa2F1dPmCacbIRFW0I/uLJy9A4r/iW7y7K1es8jgIcwwmcgQ+XUIVbqEEdOEh4hld4cx6dF+fd+Zi3rjj5zBH8kfP5AyPckAU=</latexit> <latexit sha1_base64=\"Kikg297GOi4umol6sW7ADHqcVTs=\">AAAB73icbZBNS8NAEIYnftb6VfXoJVgETyURQY9FLx4r2A9oQ9lsJ+3azSbuToRS+ie8eFDEq3/Hm//GbZuDtr6w8PDODDvzhqkUhjzv21lZXVvf2CxsFbd3dvf2SweHDZNkmmOdJzLRrZAZlEJhnQRJbKUaWRxKbIbDm2m9+YTaiETd0yjFIGZ9JSLBGVmr1aEBEus+dEtlr+LN5C6Dn0MZctW6pa9OL+FZjIq4ZMa0fS+lYMw0CS5xUuxkBlPGh6yPbYuKxWiC8WzfiXtqnZ4bJdo+Re7M/T0xZrExozi0nTGjgVmsTc3/au2MoqtgLFSaESo+/yjKpEuJOz3e7QmNnOTIAuNa2F1dPmCacbIRFW0I/uLJy9A4r/iW7y7K1es8jgIcwwmcgQ+XUIVbqEEdOEh4hld4cx6dF+fd+Zi3rjj5zBH8kfP5AyPckAU=</latexit> @ L @ i @ L @ j <latexit sha1_base64=\"XlWETJB7zY2hU0hNlCzxtFEo7NY=\">AAACTXiclVHLSsNAFJ3Ud3xVXboZLIIrSUTQZdGNCxcV7AOaUm4mEzs6eTBzI5SQH3QjuPMv3LhQRJy0Wah144GBw7n33Llzxk+l0Og4z1Ztbn5hcWl5xV5dW9/YrG9td3SSKcbbLJGJ6vmguRQxb6NAyXup4hD5knf9u/Oy3r3nSoskvsZxygcR3MQiFAzQSMN6YHuhApZ7KSgUIKkXAY4YyPyyKL6pOOIIQ1FQjwUJ0v+YbothveEcOhPQWeJWpEEqtIb1Jy9IWBbxGJkErfuuk+IgL+cyyQvbyzRPgd3BDe8bGkPE9SCfpFHQfaMENEyUOTHSifrdkUOk9TjyTWe5tv5dK8W/av0Mw9NBLuI0Qx6z6UVhJikmtIyWBkJxhnJsCDAlzK6UjcAEheYDbBOC+/vJs6RzdOgafnXcaJ5VcSyTXbJHDohLTkiTXJAWaRNGHsgLeSPv1qP1an1Yn9PWmlV5dsgP1Ja+AP9vtsE=</latexit> <latexit sha1_base64=\"XlWETJB7zY2hU0hNlCzxtFEo7NY=\">AAACTXiclVHLSsNAFJ3Ud3xVXboZLIIrSUTQZdGNCxcV7AOaUm4mEzs6eTBzI5SQH3QjuPMv3LhQRJy0Wah144GBw7n33Llzxk+l0Og4z1Ztbn5hcWl5xV5dW9/YrG9td3SSKcbbLJGJ6vmguRQxb6NAyXup4hD5knf9u/Oy3r3nSoskvsZxygcR3MQiFAzQSMN6YHuhApZ7KSgUIKkXAY4YyPyyKL6pOOIIQ1FQjwUJ0v+YbothveEcOhPQWeJWpEEqtIb1Jy9IWBbxGJkErfuuk+IgL+cyyQvbyzRPgd3BDe8bGkPE9SCfpFHQfaMENEyUOTHSifrdkUOk9TjyTWe5tv5dK8W/av0Mw9NBLuI0Qx6z6UVhJikmtIyWBkJxhnJsCDAlzK6UjcAEheYDbBOC+/vJs6RzdOgafnXcaJ5VcSyTXbJHDohLTkiTXJAWaRNGHsgLeSPv1qP1an1Yn9PWmlV5dsgP1Ja+AP9vtsE=</latexit> <latexit 
sha1_base64=\"XlWETJB7zY2hU0hNlCzxtFEo7NY=\">AAACTXiclVHLSsNAFJ3Ud3xVXboZLIIrSUTQZdGNCxcV7AOaUm4mEzs6eTBzI5SQH3QjuPMv3LhQRJy0Wah144GBw7n33Llzxk+l0Og4z1Ztbn5hcWl5xV5dW9/YrG9td3SSKcbbLJGJ6vmguRQxb6NAyXup4hD5knf9u/Oy3r3nSoskvsZxygcR3MQiFAzQSMN6YHuhApZ7KSgUIKkXAY4YyPyyKL6pOOIIQ1FQjwUJ0v+YbothveEcOhPQWeJWpEEqtIb1Jy9IWBbxGJkErfuuk+IgL+cyyQvbyzRPgd3BDe8bGkPE9SCfpFHQfaMENEyUOTHSifrdkUOk9TjyTWe5tv5dK8W/av0Mw9NBLuI0Qx6z6UVhJikmtIyWBkJxhnJsCDAlzK6UjcAEheYDbBOC+/vJs6RzdOgafnXcaJ5VcSyTXbJHDohLTkiTXJAWaRNGHsgLeSPv1qP1an1Yn9PWmlV5dsgP1Ja+AP9vtsE=</latexit> <latexit sha1_base64=\"XlWETJB7zY2hU0hNlCzxtFEo7NY=\">AAACTXiclVHLSsNAFJ3Ud3xVXboZLIIrSUTQZdGNCxcV7AOaUm4mEzs6eTBzI5SQH3QjuPMv3LhQRJy0Wah144GBw7n33Llzxk+l0Og4z1Ztbn5hcWl5xV5dW9/YrG9td3SSKcbbLJGJ6vmguRQxb6NAyXup4hD5knf9u/Oy3r3nSoskvsZxygcR3MQiFAzQSMN6YHuhApZ7KSgUIKkXAY4YyPyyKL6pOOIIQ1FQjwUJ0v+YbothveEcOhPQWeJWpEEqtIb1Jy9IWBbxGJkErfuuk+IgL+cyyQvbyzRPgd3BDe8bGkPE9SCfpFHQfaMENEyUOTHSifrdkUOk9TjyTWe5tv5dK8W/av0Mw9NBLuI0Qx6z6UVhJikmtIyWBkJxhnJsCDAlzK6UjcAEheYDbBOC+/vJs6RzdOgafnXcaJ5VcSyTXbJHDohLTkiTXJAWaRNGHsgLeSPv1qP1an1Yn9PWmlV5dsgP1Ja+AP9vtsE=</latexit> @ L @ i @ L @ j <latexit sha1_base64=\"XlWETJB7zY2hU0hNlCzxtFEo7NY=\">AAACTXiclVHLSsNAFJ3Ud3xVXboZLIIrSUTQZdGNCxcV7AOaUm4mEzs6eTBzI5SQH3QjuPMv3LhQRJy0Wah144GBw7n33Llzxk+l0Og4z1Ztbn5hcWl5xV5dW9/YrG9td3SSKcbbLJGJ6vmguRQxb6NAyXup4hD5knf9u/Oy3r3nSoskvsZxygcR3MQiFAzQSMN6YHuhApZ7KSgUIKkXAY4YyPyyKL6pOOIIQ1FQjwUJ0v+YbothveEcOhPQWeJWpEEqtIb1Jy9IWBbxGJkErfuuk+IgL+cyyQvbyzRPgd3BDe8bGkPE9SCfpFHQfaMENEyUOTHSifrdkUOk9TjyTWe5tv5dK8W/av0Mw9NBLuI0Qx6z6UVhJikmtIyWBkJxhnJsCDAlzK6UjcAEheYDbBOC+/vJs6RzdOgafnXcaJ5VcSyTXbJHDohLTkiTXJAWaRNGHsgLeSPv1qP1an1Yn9PWmlV5dsgP1Ja+AP9vtsE=</latexit> <latexit sha1_base64=\"XlWETJB7zY2hU0hNlCzxtFEo7NY=\">AAACTXiclVHLSsNAFJ3Ud3xVXboZLIIrSUTQZdGNCxcV7AOaUm4mEzs6eTBzI5SQH3QjuPMv3LhQRJy0Wah144GBw7n33Llzxk+l0Og4z1Ztbn5hcWl5xV5dW9/YrG9td3SSKcbbLJGJ6vmguRQxb6NAyXup4hD5knf9u/Oy3r3nSoskvsZxygcR3MQiFAzQSMN6YHuhApZ7KSgUIKkXAY4YyPyyKL6pOOIIQ1FQjwUJ0v+YbothveEcOhPQWeJWpEEqtIb1Jy9IWBbxGJkErfuuk+IgL+cyyQvbyzRPgd3BDe8bGkPE9SCfpFHQfaMENEyUOTHSifrdkUOk9TjyTWe5tv5dK8W/av0Mw9NBLuI0Qx6z6UVhJikmtIyWBkJxhnJsCDAlzK6UjcAEheYDbBOC+/vJs6RzdOgafnXcaJ5VcSyTXbJHDohLTkiTXJAWaRNGHsgLeSPv1qP1an1Yn9PWmlV5dsgP1Ja+AP9vtsE=</latexit> <latexit sha1_base64=\"XlWETJB7zY2hU0hNlCzxtFEo7NY=\">AAACTXiclVHLSsNAFJ3Ud3xVXboZLIIrSUTQZdGNCxcV7AOaUm4mEzs6eTBzI5SQH3QjuPMv3LhQRJy0Wah144GBw7n33Llzxk+l0Og4z1Ztbn5hcWl5xV5dW9/YrG9td3SSKcbbLJGJ6vmguRQxb6NAyXup4hD5knf9u/Oy3r3nSoskvsZxygcR3MQiFAzQSMN6YHuhApZ7KSgUIKkXAY4YyPyyKL6pOOIIQ1FQjwUJ0v+YbothveEcOhPQWeJWpEEqtIb1Jy9IWBbxGJkErfuuk+IgL+cyyQvbyzRPgd3BDe8bGkPE9SCfpFHQfaMENEyUOTHSifrdkUOk9TjyTWe5tv5dK8W/av0Mw9NBLuI0Qx6z6UVhJikmtIyWBkJxhnJsCDAlzK6UjcAEheYDbBOC+/vJs6RzdOgafnXcaJ5VcSyTXbJHDohLTkiTXJAWaRNGHsgLeSPv1qP1an1Yn9PWmlV5dsgP1Ja+AP9vtsE=</latexit> <latexit sha1_base64=\"XlWETJB7zY2hU0hNlCzxtFEo7NY=\">AAACTXiclVHLSsNAFJ3Ud3xVXboZLIIrSUTQZdGNCxcV7AOaUm4mEzs6eTBzI5SQH3QjuPMv3LhQRJy0Wah144GBw7n33Llzxk+l0Og4z1Ztbn5hcWl5xV5dW9/YrG9td3SSKcbbLJGJ6vmguRQxb6NAyXup4hD5knf9u/Oy3r3nSoskvsZxygcR3MQiFAzQSMN6YHuhApZ7KSgUIKkXAY4YyPyyKL6pOOIIQ1FQjwUJ0v+YbothveEcOhPQWeJWpEEqtIb1Jy9IWBbxGJkErfuuk+IgL+cyyQvbyzRPgd3BDe8bGkPE9SCfpFHQfaMENEyUOTHSifrdkUOk9TjyTWe5tv5dK8W/av0Mw9NBLuI0Qx6z6UVhJikmtIyWBkJxhnJsCDAlzK6UjcAEheYDbBOC+/vJs6RzdOgafnXcaJ5VcSyTXbJHDohLTkiTXJAWaRNGHsgLeSPv1qP1an1Yn9PWmlV5dsgP1Ja+AP9vtsE=</latexit> High Interference HighTransfer x Router Router Router x R ( y ) <latexit 
sha1_base64=\"XF2ffYCoqLT5Qz3O1iyD7+y/b2w=\">AAAB/XicbVDLSsNAFJ34rPUVHzs3g0Wom5KIoMuiG5dV7AOaUG6mk3bo5MHMRIgh+CtuXCji1v9w5984abPQ1gMDh3Pu5Z45XsyZVJb1bSwtr6yurVc2qptb2zu75t5+R0aJILRNIh6JngeSchbStmKK014sKAQep11vcl343QcqJIvCe5XG1A1gFDKfEVBaGpiHTgBqTIBnd3ndGYPK0vx0YNashjUFXiR2SWqoRGtgfjnDiCQBDRXhIGXftmLlZiAUI5zmVSeRNAYygRHtaxpCQKWbTdPn+EQrQ+xHQr9Q4an6eyODQMo08PRkkVXOe4X4n9dPlH/pZiyME0VDMjvkJxyrCBdV4CETlCieagJEMJ0VkzEIIEoXVtUl2PNfXiSds4at+e15rXlV1lFBR+gY1ZGNLlAT3aAWaiOCHtEzekVvxpPxYrwbH7PRJaPcOUB/YHz+AJsTlU4=</latexit> <latexit sha1_base64=\"XF2ffYCoqLT5Qz3O1iyD7+y/b2w=\">AAAB/XicbVDLSsNAFJ34rPUVHzs3g0Wom5KIoMuiG5dV7AOaUG6mk3bo5MHMRIgh+CtuXCji1v9w5984abPQ1gMDh3Pu5Z45XsyZVJb1bSwtr6yurVc2qptb2zu75t5+R0aJILRNIh6JngeSchbStmKK014sKAQep11vcl343QcqJIvCe5XG1A1gFDKfEVBaGpiHTgBqTIBnd3ndGYPK0vx0YNashjUFXiR2SWqoRGtgfjnDiCQBDRXhIGXftmLlZiAUI5zmVSeRNAYygRHtaxpCQKWbTdPn+EQrQ+xHQr9Q4an6eyODQMo08PRkkVXOe4X4n9dPlH/pZiyME0VDMjvkJxyrCBdV4CETlCieagJEMJ0VkzEIIEoXVtUl2PNfXiSds4at+e15rXlV1lFBR+gY1ZGNLlAT3aAWaiOCHtEzekVvxpPxYrwbH7PRJaPcOUB/YHz+AJsTlU4=</latexit> <latexit sha1_base64=\"XF2ffYCoqLT5Qz3O1iyD7+y/b2w=\">AAAB/XicbVDLSsNAFJ34rPUVHzs3g0Wom5KIoMuiG5dV7AOaUG6mk3bo5MHMRIgh+CtuXCji1v9w5984abPQ1gMDh3Pu5Z45XsyZVJb1bSwtr6yurVc2qptb2zu75t5+R0aJILRNIh6JngeSchbStmKK014sKAQep11vcl343QcqJIvCe5XG1A1gFDKfEVBaGpiHTgBqTIBnd3ndGYPK0vx0YNashjUFXiR2SWqoRGtgfjnDiCQBDRXhIGXftmLlZiAUI5zmVSeRNAYygRHtaxpCQKWbTdPn+EQrQ+xHQr9Q4an6eyODQMo08PRkkVXOe4X4n9dPlH/pZiyME0VDMjvkJxyrCBdV4CETlCieagJEMJ0VkzEIIEoXVtUl2PNfXiSds4at+e15rXlV1lFBR+gY1ZGNLlAT3aAWaiOCHtEzekVvxpPxYrwbH7PRJaPcOUB/YHz+AJsTlU4=</latexit> <latexit sha1_base64=\"XF2ffYCoqLT5Qz3O1iyD7+y/b2w=\">AAAB/XicbVDLSsNAFJ34rPUVHzs3g0Wom5KIoMuiG5dV7AOaUG6mk3bo5MHMRIgh+CtuXCji1v9w5984abPQ1gMDh3Pu5Z45XsyZVJb1bSwtr6yurVc2qptb2zu75t5+R0aJILRNIh6JngeSchbStmKK014sKAQep11vcl343QcqJIvCe5XG1A1gFDKfEVBaGpiHTgBqTIBnd3ndGYPK0vx0YNashjUFXiR2SWqoRGtgfjnDiCQBDRXhIGXftmLlZiAUI5zmVSeRNAYygRHtaxpCQKWbTdPn+EQrQ+xHQr9Q4an6eyODQMo08PRkkVXOe4X4n9dPlH/pZiyME0VDMjvkJxyrCBdV4CETlCieagJEMJ0VkzEIIEoXVtUl2PNfXiSds4at+e15rXlV1lFBR+gY1ZGNLlAT3aAWaiOCHtEzekVvxpPxYrwbH7PRJaPcOUB/YHz+AJsTlU4=</latexit> @ L @f 2 <latexit sha1_base64=\"XMVribdXl5pxnvL6uUc6qMR5exQ=\">AAACE3icbVDLSsNAFJ34rPUVdelmsAjioiRF0GXRjQsXFewDmhBuppN26OTBzEQoIf/gxl9x40IRt27c+TdO2iDaemDgcM69d+49fsKZVJb1ZSwtr6yurVc2qptb2zu75t5+R8apILRNYh6Lng+SchbRtmKK014iKIQ+p11/fFX43XsqJIujOzVJqBvCMGIBI6C05JmnTiCAZE4CQjHg2AlBjQjw7CbPf9Qs8Bp57pk1q25NgReJXZIaKtHyzE9nEJM0pJEiHKTs21ai3KyYSTjNq04qaQJkDEPa1zSCkEo3m96U42OtDHAQC/0ihafq744MQiknoa8ri5XlvFeI/3n9VAUXbsaiJFU0IrOPgpRjFeMiIDxgghLFJ5oAEUzviskIdEhKx1jVIdjzJy+STqNua357VmtelnFU0CE6QifIRueoia5RC7URQQ/oCb2gV+PReDbejPdZ6ZJR9hygPzA+vgG+TZ9S</latexit> <latexit sha1_base64=\"XMVribdXl5pxnvL6uUc6qMR5exQ=\">AAACE3icbVDLSsNAFJ34rPUVdelmsAjioiRF0GXRjQsXFewDmhBuppN26OTBzEQoIf/gxl9x40IRt27c+TdO2iDaemDgcM69d+49fsKZVJb1ZSwtr6yurVc2qptb2zu75t5+R8apILRNYh6Lng+SchbRtmKK014iKIQ+p11/fFX43XsqJIujOzVJqBvCMGIBI6C05JmnTiCAZE4CQjHg2AlBjQjw7CbPf9Qs8Bp57pk1q25NgReJXZIaKtHyzE9nEJM0pJEiHKTs21ai3KyYSTjNq04qaQJkDEPa1zSCkEo3m96U42OtDHAQC/0ihafq744MQiknoa8ri5XlvFeI/3n9VAUXbsaiJFU0IrOPgpRjFeMiIDxgghLFJ5oAEUzviskIdEhKx1jVIdjzJy+STqNua357VmtelnFU0CE6QifIRueoia5RC7URQQ/oCb2gV+PReDbejPdZ6ZJR9hygPzA+vgG+TZ9S</latexit> <latexit 
sha1_base64=\"XMVribdXl5pxnvL6uUc6qMR5exQ=\">AAACE3icbVDLSsNAFJ34rPUVdelmsAjioiRF0GXRjQsXFewDmhBuppN26OTBzEQoIf/gxl9x40IRt27c+TdO2iDaemDgcM69d+49fsKZVJb1ZSwtr6yurVc2qptb2zu75t5+R8apILRNYh6Lng+SchbRtmKK014iKIQ+p11/fFX43XsqJIujOzVJqBvCMGIBI6C05JmnTiCAZE4CQjHg2AlBjQjw7CbPf9Qs8Bp57pk1q25NgReJXZIaKtHyzE9nEJM0pJEiHKTs21ai3KyYSTjNq04qaQJkDEPa1zSCkEo3m96U42OtDHAQC/0ihafq744MQiknoa8ri5XlvFeI/3n9VAUXbsaiJFU0IrOPgpRjFeMiIDxgghLFJ5oAEUzviskIdEhKx1jVIdjzJy+STqNua357VmtelnFU0CE6QifIRueoia5RC7URQQ/oCb2gV+PReDbejPdZ6ZJR9hygPzA+vgG+TZ9S</latexit> <latexit sha1_base64=\"XMVribdXl5pxnvL6uUc6qMR5exQ=\">AAACE3icbVDLSsNAFJ34rPUVdelmsAjioiRF0GXRjQsXFewDmhBuppN26OTBzEQoIf/gxl9x40IRt27c+TdO2iDaemDgcM69d+49fsKZVJb1ZSwtr6yurVc2qptb2zu75t5+R8apILRNYh6Lng+SchbRtmKK014iKIQ+p11/fFX43XsqJIujOzVJqBvCMGIBI6C05JmnTiCAZE4CQjHg2AlBjQjw7CbPf9Qs8Bp57pk1q25NgReJXZIaKtHyzE9nEJM0pJEiHKTs21ai3KyYSTjNq04qaQJkDEPa1zSCkEo3m96U42OtDHAQC/0ihafq744MQiknoa8ri5XlvFeI/3n9VAUXbsaiJFU0IrOPgpRjFeMiIDxgghLFJ5oAEUzviskIdEhKx1jVIdjzJy+STqNua357VmtelnFU0CE6QifIRueoia5RC7URQQ/oCb2gV+PReDbejPdZ6ZJR9hygPzA+vgG+TZ9S</latexit> y = f 2 ( f 1 ( f 3 ( x ))) <latexit sha1_base64=\"9JuqyoDLh7XfXgRAEKM25NHE49o=\">AAACA3icbZDLSsNAFIYnXmu9Rd3pZrAI7aYkVdCNUHTjsoK9QBvCZDpph04uzJyIIRTc+CpuXCji1pdw59s4bbPQ1h8OfPznHGbO78WCK7Csb2NpeWV1bb2wUdzc2t7ZNff2WypKJGVNGolIdjyimOAhawIHwTqxZCTwBGt7o+tJv33PpOJReAdpzJyADELuc0pAW6552BsSyNIxvsS+Wyv7rq3rtPxQqVRcs2RVranwItg5lFCuhmt+9foRTQIWAhVEqa5txeBkRAKngo2LvUSxmNARGbCuxpAETDnZ9IYxPtFOH/uR1BUCnrq/NzISKJUGnp4MCAzVfG9i/tfrJuBfOBkP4wRYSGcP+YnAEOFJILjPJaMgUg2ESq7/iumQSEJBx1bUIdjzJy9Cq1a1Nd+elepXeRwFdISOURnZ6BzV0Q1qoCai6BE9o1f0ZjwZL8a78TEbXTLynQP0R8bnD5aElYY=</latexit> <latexit sha1_base64=\"9JuqyoDLh7XfXgRAEKM25NHE49o=\">AAACA3icbZDLSsNAFIYnXmu9Rd3pZrAI7aYkVdCNUHTjsoK9QBvCZDpph04uzJyIIRTc+CpuXCji1pdw59s4bbPQ1h8OfPznHGbO78WCK7Csb2NpeWV1bb2wUdzc2t7ZNff2WypKJGVNGolIdjyimOAhawIHwTqxZCTwBGt7o+tJv33PpOJReAdpzJyADELuc0pAW6552BsSyNIxvsS+Wyv7rq3rtPxQqVRcs2RVranwItg5lFCuhmt+9foRTQIWAhVEqa5txeBkRAKngo2LvUSxmNARGbCuxpAETDnZ9IYxPtFOH/uR1BUCnrq/NzISKJUGnp4MCAzVfG9i/tfrJuBfOBkP4wRYSGcP+YnAEOFJILjPJaMgUg2ESq7/iumQSEJBx1bUIdjzJy9Cq1a1Nd+elepXeRwFdISOURnZ6BzV0Q1qoCai6BE9o1f0ZjwZL8a78TEbXTLynQP0R8bnD5aElYY=</latexit> <latexit sha1_base64=\"9JuqyoDLh7XfXgRAEKM25NHE49o=\">AAACA3icbZDLSsNAFIYnXmu9Rd3pZrAI7aYkVdCNUHTjsoK9QBvCZDpph04uzJyIIRTc+CpuXCji1pdw59s4bbPQ1h8OfPznHGbO78WCK7Csb2NpeWV1bb2wUdzc2t7ZNff2WypKJGVNGolIdjyimOAhawIHwTqxZCTwBGt7o+tJv33PpOJReAdpzJyADELuc0pAW6552BsSyNIxvsS+Wyv7rq3rtPxQqVRcs2RVranwItg5lFCuhmt+9foRTQIWAhVEqa5txeBkRAKngo2LvUSxmNARGbCuxpAETDnZ9IYxPtFOH/uR1BUCnrq/NzISKJUGnp4MCAzVfG9i/tfrJuBfOBkP4wRYSGcP+YnAEOFJILjPJaMgUg2ESq7/iumQSEJBx1bUIdjzJy9Cq1a1Nd+elepXeRwFdISOURnZ6BzV0Q1qoCai6BE9o1f0ZjwZL8a78TEbXTLynQP0R8bnD5aElYY=</latexit> <latexit sha1_base64=\"9JuqyoDLh7XfXgRAEKM25NHE49o=\">AAACA3icbZDLSsNAFIYnXmu9Rd3pZrAI7aYkVdCNUHTjsoK9QBvCZDpph04uzJyIIRTc+CpuXCji1pdw59s4bbPQ1h8OfPznHGbO78WCK7Csb2NpeWV1bb2wUdzc2t7ZNff2WypKJGVNGolIdjyimOAhawIHwTqxZCTwBGt7o+tJv33PpOJReAdpzJyADELuc0pAW6552BsSyNIxvsS+Wyv7rq3rtPxQqVRcs2RVranwItg5lFCuhmt+9foRTQIWAhVEqa5txeBkRAKngo2LvUSxmNARGbCuxpAETDnZ9IYxPtFOH/uR1BUCnrq/NzISKJUGnp4MCAzVfG9i/tfrJuBfOBkP4wRYSGcP+YnAEOFJILjPJaMgUg2ESq7/iumQSEJBx1bUIdjzJy9Cq1a1Nd+elepXeRwFdISOURnZ6BzV0Q1qoCai6BE9o1f0ZjwZL8a78TEbXTLynQP0R8bnD5aElYY=</latexit> @ L @f 3 <latexit 
sha1_base64=\"CITt/kw7q0ejtIzO5trHyWTUmlk=\">AAACE3icbVDLSsNAFJ34rPUVdelmsAjioiQq6LLoxoWLCvYBTQg300k7dPJgZiKUkH9w46+4caGIWzfu/BsnbRBtPTBwOOfeO/ceP+FMKsv6MhYWl5ZXVitr1fWNza1tc2e3LeNUENoiMY9F1wdJOYtoSzHFaTcRFEKf044/uir8zj0VksXRnRon1A1hELGAEVBa8sxjJxBAMicBoRhw7ISghgR4dpPnP2oWeKd57pk1q25NgOeJXZIaKtH0zE+nH5M0pJEiHKTs2Vai3KyYSTjNq04qaQJkBAPa0zSCkEo3m9yU40Ot9HEQC/0ihSfq744MQinHoa8ri5XlrFeI/3m9VAUXbsaiJFU0ItOPgpRjFeMiINxnghLFx5oAEUzviskQdEhKx1jVIdizJ8+T9knd1vz2rNa4LOOooH10gI6Qjc5RA12jJmohgh7QE3pBr8aj8Wy8Ge/T0gWj7NlDf2B8fAO/059T</latexit> <latexit sha1_base64=\"CITt/kw7q0ejtIzO5trHyWTUmlk=\">AAACE3icbVDLSsNAFJ34rPUVdelmsAjioiQq6LLoxoWLCvYBTQg300k7dPJgZiKUkH9w46+4caGIWzfu/BsnbRBtPTBwOOfeO/ceP+FMKsv6MhYWl5ZXVitr1fWNza1tc2e3LeNUENoiMY9F1wdJOYtoSzHFaTcRFEKf044/uir8zj0VksXRnRon1A1hELGAEVBa8sxjJxBAMicBoRhw7ISghgR4dpPnP2oWeKd57pk1q25NgOeJXZIaKtH0zE+nH5M0pJEiHKTs2Vai3KyYSTjNq04qaQJkBAPa0zSCkEo3m9yU40Ot9HEQC/0ihSfq744MQinHoa8ri5XlrFeI/3m9VAUXbsaiJFU0ItOPgpRjFeMiINxnghLFx5oAEUzviskQdEhKx1jVIdizJ8+T9knd1vz2rNa4LOOooH10gI6Qjc5RA12jJmohgh7QE3pBr8aj8Wy8Ge/T0gWj7NlDf2B8fAO/059T</latexit> <latexit sha1_base64=\"CITt/kw7q0ejtIzO5trHyWTUmlk=\">AAACE3icbVDLSsNAFJ34rPUVdelmsAjioiQq6LLoxoWLCvYBTQg300k7dPJgZiKUkH9w46+4caGIWzfu/BsnbRBtPTBwOOfeO/ceP+FMKsv6MhYWl5ZXVitr1fWNza1tc2e3LeNUENoiMY9F1wdJOYtoSzHFaTcRFEKf044/uir8zj0VksXRnRon1A1hELGAEVBa8sxjJxBAMicBoRhw7ISghgR4dpPnP2oWeKd57pk1q25NgOeJXZIaKtH0zE+nH5M0pJEiHKTs2Vai3KyYSTjNq04qaQJkBAPa0zSCkEo3m9yU40Ot9HEQC/0ihSfq744MQinHoa8ri5XlrFeI/3m9VAUXbsaiJFU0ItOPgpRjFeMiINxnghLFx5oAEUzviskQdEhKx1jVIdizJ8+T9knd1vz2rNa4LOOooH10gI6Qjc5RA12jJmohgh7QE3pBr8aj8Wy8Ge/T0gWj7NlDf2B8fAO/059T</latexit> <latexit sha1_base64=\"CITt/kw7q0ejtIzO5trHyWTUmlk=\">AAACE3icbVDLSsNAFJ34rPUVdelmsAjioiQq6LLoxoWLCvYBTQg300k7dPJgZiKUkH9w46+4caGIWzfu/BsnbRBtPTBwOOfeO/ceP+FMKsv6MhYWl5ZXVitr1fWNza1tc2e3LeNUENoiMY9F1wdJOYtoSzHFaTcRFEKf044/uir8zj0VksXRnRon1A1hELGAEVBa8sxjJxBAMicBoRhw7ISghgR4dpPnP2oWeKd57pk1q25NgOeJXZIaKtH0zE+nH5M0pJEiHKTs2Vai3KyYSTjNq04qaQJkBAPa0zSCkEo3m9yU40Ot9HEQC/0ihSfq744MQinHoa8ri5XlrFeI/3m9VAUXbsaiJFU0ItOPgpRjFeMiINxnghLFx5oAEUzviskQdEhKx1jVIdizJ8+T9knd1vz2rNa4LOOooH10gI6Qjc5RA12jJmohgh7QE3pBr8aj8Wy8Ge/T0gWj7NlDf2B8fAO/059T</latexit> @ L @f 1 <latexit sha1_base64=\"BRuxbiasg2dbOQyMtk6JM/djXTs=\">AAACE3icbVDLSsNAFL2pr1pfUZduBosgLkoigi6Lbly4qGAf0IQwmU7aoZMHMxOhhPyDG3/FjQtF3Lpx5984aYNo64GBwzn33rn3+AlnUlnWl1FZWl5ZXauu1zY2t7Z3zN29joxTQWibxDwWPR9LyllE24opTnuJoDj0Oe3646vC795TIVkc3alJQt0QDyMWMIKVljzzxAkEJpmTYKEY5sgJsRoRzLObPP9Rs8Cz89wz61bDmgItErskdSjR8sxPZxCTNKSRIhxL2betRLlZMZNwmtecVNIEkzEe0r6mEQ6pdLPpTTk60soABbHQL1Joqv7uyHAo5ST0dWWxspz3CvE/r5+q4MLNWJSkikZk9lGQcqRiVASEBkxQovhEE0wE07siMsI6JKVjrOkQ7PmTF0nntGFrfntWb16WcVThAA7hGGw4hyZcQwvaQOABnuAFXo1H49l4M95npRWj7NmHPzA+vgG8x59R</latexit> <latexit sha1_base64=\"BRuxbiasg2dbOQyMtk6JM/djXTs=\">AAACE3icbVDLSsNAFL2pr1pfUZduBosgLkoigi6Lbly4qGAf0IQwmU7aoZMHMxOhhPyDG3/FjQtF3Lpx5984aYNo64GBwzn33rn3+AlnUlnWl1FZWl5ZXauu1zY2t7Z3zN29joxTQWibxDwWPR9LyllE24opTnuJoDj0Oe3646vC795TIVkc3alJQt0QDyMWMIKVljzzxAkEJpmTYKEY5sgJsRoRzLObPP9Rs8Cz89wz61bDmgItErskdSjR8sxPZxCTNKSRIhxL2betRLlZMZNwmtecVNIEkzEe0r6mEQ6pdLPpTTk60soABbHQL1Joqv7uyHAo5ST0dWWxspz3CvE/r5+q4MLNWJSkikZk9lGQcqRiVASEBkxQovhEE0wE07siMsI6JKVjrOkQ7PmTF0nntGFrfntWb16WcVThAA7hGGw4hyZcQwvaQOABnuAFXo1H49l4M95npRWj7NmHPzA+vgG8x59R</latexit> <latexit 
sha1_base64=\"BRuxbiasg2dbOQyMtk6JM/djXTs=\">AAACE3icbVDLSsNAFL2pr1pfUZduBosgLkoigi6Lbly4qGAf0IQwmU7aoZMHMxOhhPyDG3/FjQtF3Lpx5984aYNo64GBwzn33rn3+AlnUlnWl1FZWl5ZXauu1zY2t7Z3zN29joxTQWibxDwWPR9LyllE24opTnuJoDj0Oe3646vC795TIVkc3alJQt0QDyMWMIKVljzzxAkEJpmTYKEY5sgJsRoRzLObPP9Rs8Cz89wz61bDmgItErskdSjR8sxPZxCTNKSRIhxL2betRLlZMZNwmtecVNIEkzEe0r6mEQ6pdLPpTTk60soABbHQL1Joqv7uyHAo5ST0dWWxspz3CvE/r5+q4MLNWJSkikZk9lGQcqRiVASEBkxQovhEE0wE07siMsI6JKVjrOkQ7PmTF0nntGFrfntWb16WcVThAA7hGGw4hyZcQwvaQOABnuAFXo1H49l4M95npRWj7NmHPzA+vgG8x59R</latexit> <latexit sha1_base64=\"BRuxbiasg2dbOQyMtk6JM/djXTs=\">AAACE3icbVDLSsNAFL2pr1pfUZduBosgLkoigi6Lbly4qGAf0IQwmU7aoZMHMxOhhPyDG3/FjQtF3Lpx5984aYNo64GBwzn33rn3+AlnUlnWl1FZWl5ZXauu1zY2t7Z3zN29joxTQWibxDwWPR9LyllE24opTnuJoDj0Oe3646vC795TIVkc3alJQt0QDyMWMIKVljzzxAkEJpmTYKEY5sgJsRoRzLObPP9Rs8Cz89wz61bDmgItErskdSjR8sxPZxCTNKSRIhxL2betRLlZMZNwmtecVNIEkzEe0r6mEQ6pdLPpTTk60soABbHQL1Joqv7uyHAo5ST0dWWxspz3CvE/r5+q4MLNWJSkikZk9lGQcqRiVASEBkxQovhEE0wE07siMsI6JKVjrOkQ7PmTF0nntGFrfntWb16WcVThAA7hGGw4hyZcQwvaQOABnuAFXo1H49l4M95npRWj7NmHPzA+vgG8x59R</latexit> L ( y ) <latexit sha1_base64=\"rEC81UtAemRDLO3rEc4EGJN/sFk=\">AAAB/XicbVDLSsNAFJ34rPUVHzs3g0Wom5KIoMuiGxcuKtgHNKFMppN26OTBzI0QQ/BX3LhQxK3/4c6/cdJmoa0HBg7n3Ms9c7xYcAWW9W0sLa+srq1XNqqbW9s7u+befkdFiaSsTSMRyZ5HFBM8ZG3gIFgvlowEnmBdb3Jd+N0HJhWPwntIY+YGZBRyn1MCWhqYh05AYEyJyG7zujMmkKX56cCsWQ1rCrxI7JLUUInWwPxyhhFNAhYCFUSpvm3F4GZEAqeC5VUnUSwmdEJGrK9pSAKm3GyaPscnWhliP5L6hYCn6u+NjARKpYGnJ4usat4rxP+8fgL+pZvxME6AhXR2yE8EhggXVeAhl4yCSDUhVHKdFdMxkYSCLqyqS7Dnv7xIOmcNW/O781rzqqyjgo7QMaojG12gJrpBLdRGFD2iZ/SK3own48V4Nz5mo0tGuXOA/sD4/AGRv5VI</latexit> <latexit sha1_base64=\"rEC81UtAemRDLO3rEc4EGJN/sFk=\">AAAB/XicbVDLSsNAFJ34rPUVHzs3g0Wom5KIoMuiGxcuKtgHNKFMppN26OTBzI0QQ/BX3LhQxK3/4c6/cdJmoa0HBg7n3Ms9c7xYcAWW9W0sLa+srq1XNqqbW9s7u+befkdFiaSsTSMRyZ5HFBM8ZG3gIFgvlowEnmBdb3Jd+N0HJhWPwntIY+YGZBRyn1MCWhqYh05AYEyJyG7zujMmkKX56cCsWQ1rCrxI7JLUUInWwPxyhhFNAhYCFUSpvm3F4GZEAqeC5VUnUSwmdEJGrK9pSAKm3GyaPscnWhliP5L6hYCn6u+NjARKpYGnJ4usat4rxP+8fgL+pZvxME6AhXR2yE8EhggXVeAhl4yCSDUhVHKdFdMxkYSCLqyqS7Dnv7xIOmcNW/O781rzqqyjgo7QMaojG12gJrpBLdRGFD2iZ/SK3own48V4Nz5mo0tGuXOA/sD4/AGRv5VI</latexit> <latexit sha1_base64=\"rEC81UtAemRDLO3rEc4EGJN/sFk=\">AAAB/XicbVDLSsNAFJ34rPUVHzs3g0Wom5KIoMuiGxcuKtgHNKFMppN26OTBzI0QQ/BX3LhQxK3/4c6/cdJmoa0HBg7n3Ms9c7xYcAWW9W0sLa+srq1XNqqbW9s7u+befkdFiaSsTSMRyZ5HFBM8ZG3gIFgvlowEnmBdb3Jd+N0HJhWPwntIY+YGZBRyn1MCWhqYh05AYEyJyG7zujMmkKX56cCsWQ1rCrxI7JLUUInWwPxyhhFNAhYCFUSpvm3F4GZEAqeC5VUnUSwmdEJGrK9pSAKm3GyaPscnWhliP5L6hYCn6u+NjARKpYGnJ4usat4rxP+8fgL+pZvxME6AhXR2yE8EhggXVeAhl4yCSDUhVHKdFdMxkYSCLqyqS7Dnv7xIOmcNW/O781rzqqyjgo7QMaojG12gJrpBLdRGFD2iZ/SK3own48V4Nz5mo0tGuXOA/sD4/AGRv5VI</latexit> <latexit sha1_base64=\"rEC81UtAemRDLO3rEc4EGJN/sFk=\">AAAB/XicbVDLSsNAFJ34rPUVHzs3g0Wom5KIoMuiGxcuKtgHNKFMppN26OTBzI0QQ/BX3LhQxK3/4c6/cdJmoa0HBg7n3Ms9c7xYcAWW9W0sLa+srq1XNqqbW9s7u+befkdFiaSsTSMRyZ5HFBM8ZG3gIFgvlowEnmBdb3Jd+N0HJhWPwntIY+YGZBRyn1MCWhqYh05AYEyJyG7zujMmkKX56cCsWQ1rCrxI7JLUUInWwPxyhhFNAhYCFUSpvm3F4GZEAqeC5VUnUSwmdEJGrK9pSAKm3GyaPscnWhliP5L6hYCn6u+NjARKpYGnJ4usat4rxP+8fgL+pZvxME6AhXR2yE8EhggXVeAhl4yCSDUhVHKdFdMxkYSCLqyqS7Dnv7xIOmcNW/O781rzqqyjgo7QMaojG12gJrpBLdRGFD2iZ/SK3own48V4Nz5mo0tGuXOA/sD4/AGRv5VI</latexit> Modular Backpropagation Forward Decision Feedback Figure 2 : Left: RRN learning.",
"Forward (grey background) and backward passes (white background) during the training of a rollout of an RRN with three blocks (width) and limited to depth three (height).",
"In this unrolled version, a path is the sequence of function blocks selected by the router.",
"Right: The dilemma of weight sharing and the transferinterference trade-off.",
"When examples are learned using largely separate weights, there is a thin possible distribution of gradient dot products that limits interference, which unfortunately also limits transfer.",
"This is beneficial for unrelated examples, but frustrates the learning of related ones.",
"Conversely, when examples are learned using largely shared weights, there is a wide possible distribution of gradient dot products that allows for both high magnitude transfer and interference.",
"This is beneficial for related examples, but maximizes interference when examples are unrelated.",
"With RRN s, we provide our network with an unprecedented degree of leverage to learn to navigate this trade-off.",
"lizes curriculum learning to solve arithmetic and vision problems; and ModularNetworks (Kirsch et al., 2018), which extend Routing Networks with an EM-like training approach.",
"We show how to incorporate RRN s into different neural components (word representation layers, recurrent network hidden layers, classifier layers), and we study their application to natural language inference ( NLI ), in which premisehypothesis pairs are labeled for whether the premise entails, contradicts, or is neutral with respect to the hypothesis.",
"We chose this task because reasoning in natural language involves context-sensitive interpretation of words and sentences as well as compositional structure.",
"We make use of the MULTINLI corpus (Williams et al., 2018), which includes text from multiple genres that we expect to condition linguistic senses in complex ways.",
"Our experiments show that RRN s learn policies and components that reflect this genre structure, which leads to superior performance.",
"We also introduce a new corpus of NLI examples involving implicative constructions like manage to , be able to , and fail to (Karttunen, 1971, 2012).",
"This corpus follows the design of many recent NLI corpora, but with the added challenges of reasoning about implicatives, which have logical signatures that interact compositionally with each other and with surrounding semantic operators.",
"Our experiments show that trained RRN model components become fine-tuned to these signatures.",
"Finally, we introduce an extension of the framework which leverages a Dispatcher for situations where meta-information is not available at test time, obtaining very promising results.",
"RRN s are a natural extension of the Routing Networks introduced by Rosenbaum et al. (2017), themselves part of a larger class of conditional computation models (Bengio et al., 2015).",
"Routing Networks combine trainable modules with a meta-learner called a router , which is typically trained using reinforcement learning, though any hard-decision making algorithm could be used.",
"Given an example, the router selects a module and applies it, yielding an activation that can be the input to another routing iteration.",
"As Rosenbaum et al. observe, this can be modelled as a recursive process where the router is applied repeatedly to define complex paths through the modules.",
"The final routed output is passed to additional layers or interpreted as the output of the network (Figure 1).",
"We are motivated to consider Routing Networks, and in particular recursive variants, in order to address the transferinterference trade-off (Riemer et al., 2019).",
"Vanilla neural networks latently attempt to solve a very difficult problem of deciding when to orthogonalize and compress knowledge (Figure 2, right).",
"When two examples are learned with the same weights, there is high potential for transfer as well as interference.",
"This is good for related examples because it maximizes the potential for transfer.",
"However, weight sharing is bad for unrelated examples, as it increases the likelihood of interference.",
"In contrast, when two examples are learned using different weights, there is low potential for transfer and interference.",
"This is beneficial for unrelated examples, but limits the potential for learning about commonalities between related ones.",
"RRN s extend vanilla neural networks by granting them the leverage to navigate this trade-off by making global functional decisions at the module level.",
"As a result, RRN s explicitly make decisions to compress or orthogonalize knowledge between examples by deciding whether to share specific weights.",
"Thus far, hard-selection routing has not been applied to language domains.",
"Zaremoodi et al. (2018) introduced a soft version of routing that falls within a larger class of Mixtures of Experts ( M o E ) models (Jacobs et al., 1991).",
"However, M o E models differ from RRN s in two crucial ways.",
"First, M o E models generally do not consider the recursive application of functions.",
"The promise is that we can compose functions to reflect the compositional aspects of a problem.",
"Imagine we have a sentence encoding and we want to answer a particular question.",
"We can now condition the router on the question, so that it applies exactly the functions required that translate the encoding to extract the answer.",
"Second, M o E models do not allow for nearly the level of specialization as those based on routing, because they do not eliminate weight sharing across modules and instead only gate the sharing.",
"In practice, this still leads to significant interference across tasks.",
"Composition of modules has been widely explored for question answering, starting with Andreas et al. (2015).",
"Andreas et al. (2016) learn to assemble a deep neural network on-the-fly from a pre-specified inventory of neural modules using tree-structured layouts based on linguistic analysis.",
"Chang et al. (2018) also explore routing, but with problems described in pseudo-language rather than natural language.",
"We now provide a formal presentation of RRN s and show how to incorporate them into existing neural architectures.",
"Formally, the router bases its decision on the tuple (cid:104) f ( x ) , m (cid:105) , where x is the sample, f ( x ) = f i f k ( x ) is the composition of all applied functions from the set of all available functions F = { f 1 , . . . , f b } , and m is a vector that contains meta-information that can be utilized by the router for example, an embedding for its genre or semantic classification.",
"The router consists of a policy ( F|(cid:104) f ( x ) , m (cid:105) ) that determines which of the functions in F to apply for a given state (cid:104) f ( x ) , m (cid:105) .",
"Once a new function f k is selected, the latest activation (and, thereby, the state) gets updated to f k f ( x ) , and the process repeats.",
"1 We focus on the recursive case where the set F is generally the same for each decision.",
"Apart from being the more general formulation, this form of recursivity allows more weight sharing and better compositional generalization.",
"While there is no necessary upper limit to the number of selections, we found that limiting the number to a maximum of d makes the learning more stable.",
"Consequently, the total number of possible routing paths available is b d .",
"Decision Making One of the most important design choices for the router concerns the meta-learning algorithm.",
"Routing is limited to hard decision-making algorithms, in particular a stochastic reparameterization using the Gumbel-Softmax function (Jang et al., 2016; Maddison et al., 2016) and reinforcement learning ( RL ) algorithms.",
"As with other recent approaches to compositional architectures with hard selections (Ben-gio et al., 2015; Shazeer et al., 2017; Kirsch et al., 2018), routing networks can suffer from module collapse , a lack of diversity in the router's decision making.",
"This general problem, common in architectures that jointly train decision making and functions, stems from an early over-estimation of the value of specific functions.",
"This leads to these functions being selected and trained more than others, until these functions are indeed so good 1 Some architectures have constraints that prevent repeated selection of the same functions in F .",
"In these cases, the router has to be restricted to select from only a compatible subset of F .",
"that others are not considered.",
"To mitigate this, we start with a strategy proposed by Rosenbaum et al. (2017): exclusively conditioning the router on meta-information provided by the datasets and storing the router's policy in a table.",
"We found that this stabilizes early training.",
"During later training, we can replace the meta-information hard assignment rule with a meta-information guesser.",
"We implement this guesser with what Rosenbaum et al. call a Dispatcher , an approximation-based sub-policy that estimates the meta-information for each sample and passes it to the corresponding policies (Section 5).",
"Losses and Training Once the routing procedure terminates, the selected modules are trained using standard optimization techniques, such as backpropagation with stochastic gradient descent ( SGD ).",
"The decision-making algorithm is either also trained with SGD (Gumbel reparameterization) or trained using reinforcement learning.",
"As we focus on classification problems, our core loss function L class ( y, y ) is the standard cross-entropy loss.",
"We backpropagate this loss along the function path chosen by the router for input x .",
"Reinforcement Learning Rewards For RL training of the router, we define the reward r for the reinforcement learner to be the sum of the negative classification loss and an additional regularization reward that encourages diversity in the selection of modules: r = L class ( y, y ) + r reg (1) For discussion of this reward, see Appendix A.1.",
"If not otherwise mentioned, we assume that the routed modules are all fully connected layers with the same input and output dimensions.",
"We furthermore assume that the router is always selecting from the same set of functions when defining paths through the routing layers, as this straightforwardly allows recursion in the relevant sense.",
"Routing Classifiers The simplest method we explore involves routing the layers of a classifier.",
"To do this, we define a fixed set of b fully connected layers ( FC ), each of the same dimensionality, and we allow the router to choose any path through this set up to a maximum length d (Fig-ure 3, top center).",
"The final activation produced by the chosen path is then densely connected to a non-routed output layer.",
"To further model each example's dependence on its class, we hard-select this output layer based on the meta-information label m for x : W m ( FC k 1 FC k d ( x )) + b m (2) Routing RNN Encoders In routing RNN s we focus on their two core transformations: from the hidden state at time step t k to time step t k +1 , and from the input to the hidden state.",
"We have designed routing architectures for both of them.",
"In this paper, we start with LSTM s, but the techniques are straightforward to adapt to other cell types.",
"Figure 3, top right, shows an architecture where we route the input-to-hidden ( I 2 H ) transformation.",
"Similarly, Figure 3, bottom right, shows an architecture where we route the hidden-to-hidden ( H 2 H ) transformation.",
"While these transformations are often designed as single fully connected layers, we allow a recursive application of d steps from a selection of b modules to be applied.",
"The selections for the transformations f, i, o, c are tied to be the same.",
"The corresponding transformation (in form of their weight matrices W f/i/o/c for I 2 H and U f/i/o/c for H 2 H ) becomes: K = FC k FC m ( x t ) , K { W f , W i , W o , W c , U f , U i , U o , U c } (3) Routing CBOW Encoders We also experiment with routing a continuous bag of words ( CBOW ) encoding.",
"As routing after the main addition is just routing the classifier, we instead add a word-level transformation before the addition (Figure 3, bottom left).",
"This transformation can again be routed recursively (with up to d steps through b modules).",
"The entire CBOW model can be defined as (with w 1 , . . . , w t as premise or hypothesis): CBOWR ( w 1 , . . . , w t ) = (cid:80) ti =1 FC k 1 FC k d ( w i ) (4) Routing Transformers The Transformers model (Vaswani et al., 2017) was recently pretrained as a language model (Radford et al., 2018) achieving impressive results on several NLI datasets.",
"Since the corpus for pretraining is not available, we instead use the parameter-files distributed by the authors.",
"2 The encoding consists of twelve Transformer-blocks.",
"Each block consists of a convolutional attention layer followed by two convolutional layers (along with several dropout and layer-norm layers).",
"This allows routing at different levels of granularity, of which we investigate two: routing entire blocks and routing the attention step within each block.",
"As we use pre-defined parameters, we cannot apply the blocks or the attention layers recursively.",
"Furthermore, we have to add routing in the fine-tuning phase of using Transformers by creating b copies of each routed module (the depth is necessarily d = 12 ).",
"When fine-tuning, the router then diversifies the initially identical modules.",
"ex-2 https://github.com/openai/finetune-transformer-lm",
"periments focus on NLI , using the MULTINLI corpus (Williams et al., 2018) and a new English Corpus of Implicatives, the Stanford Corpus of Implicatives ( SCI ).",
"In both, each example is a premise/hypothesis pair labeled with one of entails , contradicts , or permits .",
"What is special about MULTINLI and SCI in the current context is that the examples also have meta-information that can be used to guide the router: a genre label for MULTINLI and an implicative signature for SCI .",
"We expect our routing models to leverage this information during policy and parameter learning.",
"For all models (except Transformers), we adopt the architecture in Figure 3, top left, in which the premise and hypothesis are processed separately, and the final representations of each are concatenated and fed into the classifier layers.",
"We explore two methods for input processing:",
"(i) pretrained word representations (GloVe; Pennington et al. 2014) and",
"(ii) pretrained contextualized representations (ELMo; Peters et al. 2018).",
"For all non-routed models, we provide explicit access to the genre label via special keywords put in front of each sentence before encoding.",
"The routed models use this label as meta-information.",
"Unless otherwise specified, we used embedding and hidden dimensions of 300.",
"The classifier consists of three fully connected layers with input and output dimensions of 600 (also when routed).",
"The final layer projects from 600 to the output dimension,",
"3. The classifier nonlinearities are ReLUs.",
"We train the modules using Adam (Kingma and Ba, 2014) with a learning rate of 1 e 3 and the router using SGD with a learning rate of 3 e 4 (for additional details, see Appendix A.5).",
"For Transformers, all hyperparameters are determined by the published parameter files.",
"In experiments with different decision making algorithms (see Appendix A.5), we found that the Gumbel-Softmax reparameterization performed 10% worse on average, with much higher variance.",
"QLearning was consistently as good as other more complex RL algorithms, so we report only our QLearning experiments.",
"As Routing Networks have more parameters than their non-routed counterparts, we ran experiments with larger non-routed networks.",
"We found that this did not affect performance, only resulting in more overfitting.",
"Table 1 : Results for MULTINLI and SCI with different baselines and their routed versions.",
"We report average accuracy with confidence intervals over five runs with different seeds.",
"For Transformers, we found that finetuning was highly volatile.",
"We therefore report test results from the best-of-5 train models.",
"All results for nested SCI were computed by fine-tuning the same network previously trained on joint.",
"WP' stands for Word Projection, +D' for Dispatching, I 2 H ' for Input-to-Hidden routing, and H 2 H ' for Hidden-to-Hidden routing.",
"Italics mark scores whose confidence intervals overlap with the best scores.",
"Figure 4 : Path (module selection) overlap for MULTINLI between genres with the CBOW GloVe WP model.",
"The diagonal represents the number of function blocks applied for a genre.",
"A maximum of three means that two genres would be routed through the exact same functions.",
"The MULTINLI corpus contains 392,702 training examples, 10K dev examples, and 10K test examples.",
"The examples come from 5 genres: fiction, government reports, the Slate website, the Switchboard corpus (Godfrey and Holliman, 1997), and Berlitz travel guides.",
"We treat these genre labels as meta-information for the model (Section 3.1).",
"Our MULTINLI results are given in Table 1, and the learning dynamics in Figure 5.",
"The best model combines the Transformer base model with routing in the attentional layers.",
"Our methods for routing the RNN seem to be less successful, but word-representation routing offers clear benefits with the CBOW base model.",
"Interestingly, the baseline (non-routed) models perform at the same level as the very similar models without genre labels evaluated by Williams et al. (2018).",
"It seems that these models are not able to take advantage of the meta-information.",
"In contrast, RRN s seem to provide the space needed to condition linguistic senses on these labels.",
"Our hypothesis is that routing will not only lead to better performance on diverse tasks like MULTINLI , but also that the paths i.e., the sequence of functions selected by the router followed by the network will reflect high-level task structure.",
"Figure 4 suggests that this is the case for MULTINLI .",
"Here we show the degree of path-overlap for all pairs of genres.",
"As we might expect, government (the 9/11 report), Slate (cur-rent affairs), and travel cluster together, as distinct from the two more isolated genres (Switch-board; spoken language) and fiction (mostly from the 20th century).",
"Karttunen (1971) discovered that implicative constructions, such as manage and waste chance (e.g. They wasted their chance to win ), have signatures , which characterize the inferences they support in positive and negative contexts.",
"This makes them the order compelled him to appear as a witness entails he appeared as a witness we have missed an opportunity to examine the art market today contradicts we have examined the art market today Mr Odinga had not been forced to change his plans permits Mr Odinga had changed his plans Table 2 : Examples from SCI randomly chosen from the validation set.",
"Each row contains a triplet formed by a premise (left column), a hypothesis (right column), and a label specifying one of the three possible relations ( entails , contradicts , permits ) holding between premise and hypothesis.",
"The last row contains an example of a probabilistic implicative (see the main text).",
"For instance, the positive sentence Joan managed to solve the problem entails Joan solved the problem , and the negative sentence Joan didn't manage to solve the problem contradicts Joan solved the problem , so we say that the verb manage has the signature +|(MacCartney and Manning, 2009; Karttunen, 2016).",
"In contrast, waste chance has the opposite signature, since they wasted the chance to befriend him contradicts they befriended him , and they didn't waste the chance to befriend him entails they befriended him .",
"There are seven implicative signatures: six were previously known (Karttunen, 1971, 2012), and we found an additional one ( +|+ ; e.g. take no time to ).",
"See Appendix C for additional details.",
"Signatures are compositional: when two or more implicative constructions are composed in a sentence, they create a nested implicative construction whose signature is determined by the signatures of the individual verbs (Nairn et al., 2006).",
"For example, John managed to remember to get the keys entails John got the keys , where the nested implicative manage to remember has the overall signature +|.",
"We also see a more limited form of compositionality inside phrasal implicatures; their signatures are often largely determined by the lexical semantic family of their constituent words.",
"Therefore, signatures make implicatives ideal for evaluating different degrees of compositional generalization with RRN s, as they provide valuable meta-information (Section 3.1).",
"Table 3 : SCI statistics.",
"Top: Percentage of validated pairs and basic agreement.",
"Bottom: Fleiss coefficients and proportion of all assignments made to the corresponding label.",
"Our SCI dataset contains 10K premise hypothesis pairs.",
"All seven signature types are represented, in addition to pragmatic signatures, which have inferential biases (see Appendix C).",
"We provide SCI 3 in three versions for all single and phrasal implicatives.",
"4 In the joint version, the underlying distribution of implicatives is shared across train, validation, and test splits.",
"In the disjoint version, a different subset of implicatives is used in train from those used in validation and test.",
"This allows us to test generalization to unseen constructions.",
"Although disjoint with respect to constructions, the constructions are carefully distributed so that all the underlying signatures and most lexical items are represented in all splits.",
"The lexical items that make up implicative constructions overlap between the splits.",
"For example, take vow appears only in training and validation, while make vow only appears in test.",
"The last version is mismatch , where different subsets of the signatures are present in training/validation and test (see appendix C for more details).",
"Data collection for SCI proceeded as follows: we collected at least 12, and most often 20, seed premises from examples found in Google Books and on the web.",
"For each example, six hypotheses were created by expert annotators.",
"5 These six examples were constructed in two steps: first, the premise was taken to be the seed sentence, and three hypotheses were created to exhaust the label space in relation to the premise.",
"Second, the premise was taken to be the negation of the seed sentence, and a different set of three hypotheses was created to also exhaust the label space with respect to this negated sentence.",
"For example, one of the seeds for manage was I managed to see who it was .",
"From this, annotators produced by hand a 3 https://nlp.stanford.edu/projects/sci/ 4 We provide nested constructions as a separate extension, with the exception of a few nested implicatives that were used in the development version of this corpus, for which the results are presented here.",
"In the final version of the corpus, single and phrasal implicatives will be separated from nested implicatives.",
"See the Appendix for more details.",
"5 Native speakers of English trained in semantics at the undergraduate level, with expertise in implicatives.",
"Figure 5 : Learning on MULTINLI with GloVe inputs.",
"Figure 6 : The path-overlap between different signatures on SCI , using the CBOW GloVe WP model, for b = 4 , d = 3",
"N + | and N | + are nested signatures.",
"negated seed, I did not manage to see who it was .",
"Then three hypotheses were generated for each of these two premises.",
"Finally, each example was labeled by the author of the hypothesis examples.",
"Validation was performed on a randomly selected subset of constructions.",
"Two additional votes were cast by different linguists, resulting in three label votes for each example in the subset.",
"The gold label was then defined to be the majority class from this set of votes.",
"The Fleiss shows high inter-annotator agreement (Table 3).",
"For each example in SCI , we use its associated implicative signature as the meta-information label.",
"This serves as a subtler kind of semantic information than genre.",
"The learning dynamics of different models are shown in Figure 7.",
"As Table 1 shows, RRN s lead to considerable gains in perfor-0 20 40 60 80 100 epochs 40 50 60 70 a cc u r a c y i n % CICBOW Glove None RNN Glove None RNN Glove CL CBOW Glove WRL Figure 7 : Learning on SCI (joint) with GloVe inputs.",
"mance over most benchmarks.",
"Recursive routing of the classifier helps consistently with all models.",
"Even simple word-level routing yields major improvements for CBOW .",
"As with MULTINLI , we believe that the modules allow the conditioning of linguistic senses.",
"Routing at the word-level for sequential models seems suboptimal here.",
"The training accuracies ( > 99 . 5% ) suggest that the problem is overfitting.",
"Intuitively, routing can assign too many exclusive parameters to each class; and as the samples within these classes are similar to be-gin with, remembering them can be easier for the network than for non-routed networks.",
"We expect examples with similar signatures to be routed along similar paths.",
"Figure 6 summarizes the path-overlap between different signatures.",
"Some similar paths involve reversals of polarity, which calls for further scrutinity.",
"However, many of these similarities make intuitive sense.",
"For example, +|.4 and +|.5 are highly similar, as are -|.7 and -|.9 .",
"In addition, the isolation of the unusual signature +|+ (limited to just two constructions) seems expected, as does the affin-ity of the nested signatures N+|and N-|+ to their unnested counterparts.",
"However, as shown in Table A3, some signatures only contain very few samples, resulting in highly noisy routing paths.",
"We now seek to provide intuitive explanation for how RRN s help in navigating the transfer interference trade-off.",
"We have characterized the trade-off in terms of controlling weight sharing between examples.",
"While baseline networks can learn to navigate this trade-off if they are optimized for the appropriate objective (Riemer et al., 2019), in general these models do not have available supervision on how to do this and so optimize greedily for the current example.",
"As such, early experiments with the baseline, non-routed models show that examples with neutral signatures, such as o|o and o|+ , are the first to be learned.",
"We noticed that baseline sequence models learned to classify take vow much earlier in the training process, as the word vow is lexicalized through training on make vow examples.",
"However, its performance decreases over time as take is lexicalized through training on examples of other implicatives with different signatures, such as take chance .",
"This behavior is the characteristic outcome of catastrophic interference: learning take chance results in decreased performance on take vow as the two examples interfere.",
"This is a particularly revealing instance as these two phrasal verbs are similar on the surface, but have quite different semantic properties, as reflected by their signatures.",
"However, given that RRN s are able to route them differently (Figure 6), interference is less likely.",
"It is also possible that routing helps with the transfer related to similar examples, an avenue we want to explore in the future.",
"We have shown that high-quality meta-information can be extremely useful.",
"Unfortunately, it is often not available at test-time.",
"To compensate, we evaluated an RRN extension in which an additional neural network module is trained to assign examples to meta-information classes (genres for MULTINLI ; signatures for SCI ).",
"When we trained this model jointly, we found that it was unstable and did not perform well.",
"However, if this model is introduced after RRN training on examples with known meta-information, then the results are extremely promising.",
"We call this Dispatcher Training (+D'), borrowing similar terminology from Rosenbaum et al. (2017).",
"Table 1 includes an initial evaluation of this variant with a CBOW base model and WP routing.",
"As we can see, accuracy actually increases by a small amount over WP alone.",
"Additionally, having +D allows the network to generalize better to unseen examples and unseen patterns.",
"Consider the relative performance drop for CBOW WP+D from the full joint SCI dataset (75.56%) to disjoint ( 0.69%) and from disjoint (74.87%) to mismatch ( 3.79%), and compare this to the plain WP version: full (74.95%) to disjoint ( 1.04%), and disjoint (73.91%) to mismatch ( 5.22%) 6 Conclusion This paper introduced Recursive Routing Networks and showed how to incorporate them into a variety of different neural architectures; we explored a range of possibilities for this, and the techniques generalize to other options straightforwardly.",
"Our evaluations focused on NLI .",
"We showed in particular that our RRN s can effectively leverage the meta-information in the MULTINLI corpus and in our new corpus focused on implicatives; not only do RRN s use this information to achieve superior accuracy, but they also learn sub-structure that reflects this high-level information, and our Dispatcher variant extends the framework to situations where the relevant meta-information is not available for testing.",
"We believe exploring more powerful variants of dispatching is an interesting avenue for future work, as is pretraining routing models on language model tasks using large corpora.",
"It is our hope that these lessons extend to other richly compositional, context-sensitive language understanding tasks.",
"We thank George Supaniratisai, Arun Chaganty, Kenny Xu and Abi See for valuable discussions, and the anonymous reviewers for their useful suggestions.",
"Clemens Rosenbaum was a recipient of an IBM PhD Fellowship while working on this publication.",
"We acknowledge the Office of the Vice Provost for Undergraduate Education at Stanford for the summer internships for Atticus Geiger, Olivia Li and Sandhini Agarwal.",
"This research is based in part upon work supported by the Stanford Data Science Initiative, by the NSF under Grant No.",
"BCS-1456077, by the NSF Award IIS-1514268, and by the Air Force Research Laboratory and DARPA under agreement number FA8750-18-2-0126.",
"The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory and DARPA or the U.S. Government."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Multilingual pretrained representations generally rely on subword segmentation algorithms to create a shared multilingual vocabulary.",
"However, standard heuristic algorithms often lead to sub-optimal segmentation, especially for languages with limited amounts of data.",
"In this paper, we take two major steps towards alleviating this problem.",
"First, we demonstrate empirically that applying existing subword regularization methods (Kudo, 2018; Provilkov et al., 2020) during fine-tuning of pre-trained multilingual representations improves the effectiveness of cross-lingual transfer.",
"Second, to take full advantage of different possible input segmentations, we propose Multi-view Subword Regularization (MVR), a method that enforces the consistency between predictions of using inputs tokenized by the standard and probabilistic segmentations.",
"Results on the XTREME multilingual benchmark (Hu et al., 2020) show that MVR brings consistent improvements of up to 2.5 points over using standard segmentation algorithms.",
"1 1 Introduction Multilingual pre-trained representations (Devlin et al., 2019; Huang et al., 2019; Conneau and Lam-ple, 2019; Conneau et al., 2020) are now an essential component of state-of-the-art methods for cross-lingual transfer (Wu and Dredze, 2019; Pires et al., 2019).",
"These methods pretrain an encoder by learning in an unsupervised way from raw textual data in up to hundreds of languages which can then be fine-tuned on annotated data of a downstream task in a high-resource language, often English, and transferred to another language.",
"In order to encode hundreds of languages with diverse vocabulary, it is standard for such multilingual models to employ a shared subword vocabulary jointly learned on the 1 Code for the method is released here: https://github.com/cindyxinyiwang/multiview-subword-regularization en excitement fr excita/tion de Auf/re/gung pt excita/o el en / j / ousi / asmc ru / Table 1: XLM-R segmentation of excitement in different languages.",
"multilingual data using heuristic word segmentation methods based on byte-pair-encoding (BPE; Sennrich et al., 2016) or unigram language models (Kudo and Richardson, 2018) (details in 2).",
"However, subword-based preprocessing can lead to sub-optimal segmentation that is inconsistent across languages, harming cross-lingual transfer performance, particularly on under-represented languages.",
"As one example, consider the segmentation of the word excitement in different languages in Tab.",
"1. The English word is not segmented, but its translations in the other languages, including the relatively high-resourced French and German, are segmented into multiple subwords.",
"Since each subword is mapped to a unique embedding vector, the segmentation discrepancywhich generally does not agree with a language's morphologycould map words from different languages to very distant representations, hurting cross-lingual transfer.",
"In fact, previous work (Conneau et al., 2020; Artetxe et al., 2020) has shown that heuristic fixes such as increasing the subword vocabulary capacity and up-sampling low-resource languages during learning of the subword segmentation can lead to significant performance improvements.",
"Despite this, there is not much work studying or improving subword segmentation methods for cross-lingual transfer.",
"Bostrom and Durrett (2020) empirically compare several popular word segmentation algorithms for pretrained language models of a single language.",
"Several works propose to use different representation granularities, such as phrase-level segmentation (Zhang and Li, 2020) or character-aware representations (Ma et al., 2020) for pretrained language models of a single high-resource language, such as English or Chinese only.",
"However, it is not a foregone conclusion that methods designed and tested on monolingual models will be immediately applicable to multilingual representations.",
"Furthermore, they add significant computation cost to the pretraining stage, which is especially problematic for multilingual pretraining on hundreds of languages.",
"The problem of suboptimal subword segmentation has drawn more attention in the context of neural machine translation (NMT).",
"Specifically, subword regularization methods have been proposed to improve the NMT model of a single language pair by randomly sampling different segmentations of the sentences during training (Kudo, 2018; Provilkov et al., 2020).",
"However, these methods have not been applied to multilingual NMT or pretrained language models and it is similarly not clear if they are useful for cross-lingual transfer.",
"In this paper, we make two contributions to close this gap.",
"First, we perform the first (to our knowledge) empirical examination of subword regularization methods on a variety of cross-lingual transfer tasks from the XTREME benchmark (Hu et al., 2020).",
"We demonstrate that despite its simplicity, this method is highly effective, providing consistent improvements across a wide variety of languages and tasks for both multilingual BERT (mBERT; Devlin et al., 2019) and XLM-R (Conneau et al., 2020) models.",
"Analysis of the results shows that this method is particularly effective for languages with non-Latin scripts despite only being applied during English fine-tuning.",
"Further, we posit that naively applying probabilistic segmentation only during fine-tuning may be sub-optimal as it creates a discrepancy between the segmentations during the pretraining and fine-tuning stages.",
"To address this problem, we propose Multi-view Subword Regularization (MVR; Fig. 1), a novel methodinspired by the usage of consistency regularization in semi-supervised learning methods (Clark et al., 2018; Xie et al., 2018)which utilizes both the standard and probabilistically segmented inputs, enforcing the model's predictions to be consistent across the two views.",
"Such consistency regularization further improves accuracy, with MVR finally demonstrating consistent gains of up to 2.5 points over the standard p ( y | x ) p ( y | x \u0000 ) Final Loss Cross Entropy Loss Prediction Consistency x * y * y * BPE BPE-dropout x x \u0000 Figure 1: Fine-tuning models using MVR on data ( x , y ) practice across all models and tasks.",
"We analyze the sources of the improvement from consistency regularization and find that it can be attributed to both label smoothing and self-ensembling.",
"Here, we first discuss two common deterministic segmentation methods based on byte pair encoding (BPE) and unigram language models (ULM), discuss their probabilistic variants, and explain how to incorporate them in training.",
"The most widely used subword segmentation methods first estimate a segmentation model from the training corpus in an unsupervised fashion.",
"They then produce a segmentation b x of the input x under the estimated segmentation model P ( x ) : b x = argmax x 2 S ( x ) P ( x ) Here S ( x ) is the set of all possible segmentations, and P ( x ) is the likelihood of a given segmentation.",
"Note that b x is deterministically selected for each input x .",
"Byte-pair encoding (BPE) The popular BPE algorithm (Sennrich et al., 2016) initializes the vocabulary with individual characters and initially represents each word as a sequence of characters.",
"It then counts the most frequent character token bigrams in the data, merges them into a new token, and adds the new token to the vocabulary.",
"This process is done iteratively until a predefined vocabulary size is reached.",
"To segment a word, BPE simply splits the word into character tokens, and iteratively merges adjacent tokens with the highest priority until no merge operation is possible.",
"That is, for an input x , it assigns segmentation probability P ( b x ) = 1 for the sequence b x obtained from the greedy merge operations, and assigns other possible segmentations a probability of",
"0. Notably, a variant of this method (Schuster and Nakajima, 2012) is used for the mBERT embedding model (Devlin et al., 2019).",
"Unigram language model (ULM) The ULM method (Kudo and Richardson, 2018) starts from a reasonably large seed vocabulary, which is iteratively pruned to maximize the training corpus likelihood under a unigram language model of the subwords until the desired vocabulary size is reached.",
"During segmentation, ULM decodes the most likely segmentation of a sentence under the estimated language model using the Viterbi algorithm.",
"This method is used in the XLM-R cross-lingual embeddings (Conneau et al., 2020).",
"As explained in 1, one drawback of both word segmentation algorithms is that they produce a deterministic segmentation for each sentence, even though multiple segmentations are possible given the same vocabulary.",
"In contrast, Kudo (2018) and Provilkov et al. (2020) have proposed methods that enable the model to generate segmentations probabilistically.",
"Instead of selecting the best subword sequence for input x , these method stochastically sample a segmentation x 0 as follows: x 0 P 0 ( x ) where P 0 ( x ) / P ( x ) if x 2 S ( x ) 0 otherwise Here we briefly introduce these two methods.",
"BPE-dropout This method is used together with the BPE algorithm, randomly dropping merge operations with a given probability p while segmenting the input data (Provilkov et al., 2020).",
"ULM-sample As the ULM algorithm relies on a language model to score segmentation candidates for picking the most likely segmentation, Kudo (2018) propose to sample from these segmentation candidates based on their language model scores.",
"Subword regularization (Kudo, 2018) is a method that incorporates probabilistic segmentation at training time to improve the robustness of models to different segmentations.",
"The idea is conceptually simple: at training time sample different segmentations x 0 for each input sentence x .",
"Previous works (Kudo, 2018; Provilkov et al., 2020) have demonstrated that subword regularization using both BPE-dropout and ULM-sampling are effective at improving machine translation accuracy, 1 2 3 4 5 6 7 8 9 > =10 Number of subwords 0 25 50 75 P e r ce n t o f t o t a l w o r d s mykabn arruen Figure 2: Percentage of words with different number of segments from different languages.",
"particularly in cross-domain transfer settings where the model is tested on a different domain than the one on which it is trained.",
"While sub-optimal word segmentation is a challenge in monolingual models, it is an even bigger challenge for multilingual pretrained models.",
"These models train a shared subword segmentation model jointly on data from many languages, but the segmentation can nonetheless be different across languages, stemming from two main issues.",
"First, the granularity of segmentation differs among languages, where the segmentation model tends to over-segment low-resource languages that do not have enough representation in the joint training data (cs, 2019).",
"Fig. 2 shows the distribution of words from languages from different language families based on the number of subwords they are split into.",
"2 We can see that the majority of English words are not segmented at all, while many languages only have less than half of the words unsegmented.",
"Notably, even though Burmese (my) is a language with little inflectional morphology, almost a quarter of the words are segmented into more than nine subwords.",
"Second, the segmentation might still be inconsistent between different languages even if the granularity is similar, as explained in Tab.",
"1. For example, neither the English word excitement nor the same word in French excita/tion are overly segmented, but segmenting the English word into excite/ment would allow the model to learn a better cross-lingual alignment.",
"Despite these issues, few methods have tried to address this subword segmentation problem for multilingual pretrained models.",
"Chau et al. (2020) 2 We use Pan et al. (2017)'s named entity recognition test data with mBERT's tokenizer.",
"propose to adapt a pretrained multilingual model to a new language by augmenting the vocabulary with a new subword vocabulary learned on the target language, but this method might not help for languages other than the target language it adapts to.",
"Chung et al. (2020) propose to separately construct a subword segmentation model for each cluster of related languages for pretraining the multilingual representations.",
"However, directly modifying the word segmentation requires retraining large pretrained models, which is computationally prohibitive in most cases.",
"In this paper, we instead propose a more efficient approach of using probabilistic segmentation during fine-tuning on labeled data of a downstream task.",
"As mismatch in segmentation is one of the factors harming cross-lingual transfer, we expect a model that becomes more robust to different varieties of segmentation in one language will be more accommodating to differing segmentations in other languages during inference.",
"Despite the simplicity of this method it is, as far as we are aware, unattested in the literature, and we verify in 5.3 that it significantly improves the cross-lingual transfer performance of multilingual pretrained models.",
"Previous attempts at SR have mainly applied it to models trained from scratch for tasks such as MT. However, the situation is somewhat different when fine-tuning pre-trained representations, in which case the original pre-trained models are generally not trained on sampled segmentations.",
"This discrepancy between the segmentation of the English labeled data and the segmentation of English monolingual data during pretraining might hurt the ability of the model to take full advantage of the parameters learned during the pretraining stage.",
"To reduce this pretrainingfine-tuning discrepancy, we propose Multi-view Subword Regularization (MVR), a method for learning from multiple segmented versions of the same data and enforcing the consistency of predictions over different segmentations.",
"Given the input b x i tokenized with the deterministic segmentation such as BPE, and x 0 i , the same input tokenized with the corresponding probabilistic segmentation algorithm such as BPE-dropout, the objective for MVR has three components J ( ) = n X i =1 \u0000 1 2 log p ( y i | b x i ) | {z } Det. Seg CrossEnt \u0000 1 2 log p ( y i | x 0 i ) | {z } Prob.",
"Seg CrossEnt + \u0000 D ( p ( y i | b x i ) || p ( y i | x 0 i )) | {z } Consistency loss (1)",
"1. A cross-entropy loss using the standard deterministic segmentation.",
"This loss acts on data whose segmentation is consistent with the segmentation seen during pretraining.",
"It thus maximizes the benefit of pretrained representations.",
"2. A cross entropy loss using probabilistic segmentation.",
"It allows the model to learn from different possible segmentations of the same input.",
"3. A distance term D ( || ) between the model prediction distributions over the two different versions of the input.",
"We use KL divergence as the distance metric and a hyperparameter \u0000 to balance the supervised cross-entropy losses and the consistency loss.",
"Minimizing the distance between the two distributions enforces the model to make consistent predictions under different input segmentations, making it robust to sub-optimal segmentation of multilingual data.",
"3 Flattening the prediction The benefit of consistency regularization might be limited if the model prediction becomes overly confident on certain classes, especially when the number of output classes is large.",
"Inspired by a similar technique in knowledge distillation (Hinton et al., 2014), we can use a softmax temperature to flatten the prediction distribution when calculating the consistency loss.",
"Specifically, the distance loss between two prediction distributions in Eq.",
"1 can be written as D ( p flat ( y i | b x i ) || p ( y i | x 0 i )) , where p flat ( y i | b x i ) = exp ( z y ) / P y 0 exp ( z y 0 ) / (2) and z y is the logit for output label y i .",
"Normally is set to 1, and a higher makes the probability distribution more evenly distributed over all classes.",
"In our experiments, we find that = 1 works well for most of the tasks and = 2 works slightly better for tasks that have larger output label spaces.",
"3 As in semi-supervised learning (Clark et al., 2018), we expect our method to also be effective when applied to unlabeled data, e.g. using target language adaptation (Pfeiffer et al., 2020), which we leave for future work.",
"Efficiency At inference time, we simply use the model prediction based on the input tokenized by deterministic segmentation only.",
"Therefore, our method does not add additional decoding latency.",
"MVR needs about twice the fine-tuning cost compared to the baseline.",
"However, compared to pretraining and inference usage of a model, fine-tuning is generally the least expensive component.",
"We evaluate the multilingual representations using tasks from the XTREME benchmark (Hu et al., 2020), focusing on the zero-shot cross-lingual transfer with English as the source language.",
"We consider sentence classification tasks including XNLI (Conneau et al., 2018) and PAWS-X (Yang et al., 2019), a structured prediction task of multilingual NER (Pan et al., 2017), and question-answering tasks including XQuAD (Artetxe et al., 2020) and MLQA (Lewis et al., 2020).",
"We evaluate on both the mBERT model which utilizes BPE to tokenize the inputs, and the XLM-R models which uses ULM segmentation.",
"To replicate the baseline, we follow the hyperparameters provided in the XTREME codebase 4 .",
"Models are fine-tuned on English training data and zero-shot transferred to other languages.",
"We run each experiment with 5 random seeds and record the average results and the standard deviation.",
"SR We use BPE-dropout (Provilkov et al., 2020) for mBERT and ULM-sample (Kudo, 2018) for XLM-R models to do probabilistic segmentation of the English labeled data.",
"BPE-dropout sets a dropout probability of p 2 [0 , 1] for the merge operations, where a higher p corresponds to stronger regularization.",
"ULM-sample utilizes a sampling temperature 2 [0 , 1] to scale the scores for segmentation candidates, and a lower leads to stronger regularization.",
"We select the p and values based on the model performance on the English dev set of the NER task and simply use the same values across all other tasks.",
"We set p = 0 .",
"1 for BPE-dropout and = 0 .",
"6 for ULM-sample.",
"MVR We select the hyperparameters for MVR using the English dev set performance 4 https://github.com/google-research/ xtreme on the NER task.",
"MVR works slightly better by using stronger regularization than SR, likely because using inputs deterministically segmented by the standard algorithm can balance the negative impact of bad tokenization by sampling from a more diverse set of segmentation candidates.",
"We use \u0000 = 0 .",
"2 , p = 0 .",
"2 for mBERT and \u0000 = 0 .",
"6 , = 0 .",
"2 for XLM-R.",
"We use prediction temperature = 2 for the question-answering tasks XQuAD and MLQA for the XLMR mdoels, and simply use = 1 for all other tasks.",
"Further analysis of hyperparameters on the performance of MVR can be found in A.1.",
"We compare performance of SR, MVR and the baseline for all models in Tab.",
"2, focusing on the average performance on all languages for each task.",
"Our baseline numbers match or exceed the benchmark results in Hu et al. (2020) for both mBERT and XLM-R large (Hu et al. (2020) do not include results for XLM-R base) on almost all tasks.",
"Applying SR on English significantly improves other languages SR is surprisingly effective for mBERTit is comparable to the baseline on XNLI and significantly improves over the baseline for the rest of the four tasks.",
"However, the gains are less consistent for XLM-R models.",
"For both XLM-R base and large, SR leads to improvements on the NER task and the PAWS-X classification task, but is mostly comparable to the baseline for the rest of the three tasks.",
"SR performs better for mBERT likely because the vocabulary of mBERT is more imbalanced than that of XLM-R; it thus benefits more from the regularization methods.",
"mBERT relies on BPE, which could be worse than ULM at tokenizing subwords into morphologically meaningful units (Bostrom and Durrett, 2020).",
"Furthermore, mBERT has only 100K words in the vocabulary while XLM-R has a much larger vocabulary of 250K.",
"MVR consistently improves over SR For mBERT, it leads to improvements of over 1 to 2 points over the baseline for all tasks.",
"It is also very effective for the XLM-R models.",
"For both the XLM-R base and the stronger XLM-R large models, MVR improves over 1 point over the baseline on the NER task and the two classification tasks.",
"On the question-answering tasks, MVR delivers strong improvements for the XLM-R base model while the improvements on the XLM-R Model Method Avg.",
"large model is slightly smaller.",
"It has around 0.5 point improvement on XQuAD and has the same performance on MLQA.",
"MVR leads to more improvements on XQuAD, probably because it has a more diverse set of languages that potentially have more sub-optimal subword segmentation.",
"The consistent gains on both mBERT and XLM-R show that MVR is a general and flexible method for a variety of pretrained multilingual models based on different segmentation methods.",
"In this section, we verify the effectiveness of the three loss components in MVR by removing each of them from the objective.",
"The ablation results on mBERT for all tasks are listed in Tab.",
"3. Removing any of the three loss components hurts the model performance by about the same amount for most of the tasks.",
"For the question answering tasks, however, removing the cross-entropy loss on the deterministically segmented inputs reduces the model performance by almost half.",
"This is likely because under this setting, the model only learns to locate exact spans for inputs tokenized by BPE-dropout, while we use the standard BPE to segment the inputs at test time.",
"In this section, we perform several analyses to better understand the behavior and root causes of the accuracy gains realized by our method.",
"In this section, we analyze the effect of our methods on languages and words with different subword segmentation granularity.",
"We focus on the NER task because it contains a diverse set of over 40 languages.",
"We calculate the average number of subword pieces in a language, and plot the gains over the baseline for these languages with respect to their average subwords in Fig.",
"3. To visualize the relationship between the two values, we also fit a trend line and record its coefficient for each method in the legend.",
"We consider three methods for mBERT: SR, MVR without consistency loss, and the full MVR.",
"The trend line for MVR has a positive coefficient, indicating that it improves more on languages that are more overly segmented.",
"Removing the consistency loss tends to hurt more for these languages.",
"SR, on the other hand, does not tend to favor languages with more subword segmentation.",
"Next, we bucket all the words together based on how many subwords they are segmented into, and compare the performance of our methods for each word bucket.",
"We use the XLM-R model and plot the results in Fig.",
"4. SR brings slightly more improvements on average for words that are split into 4 or more pieces for the large model.",
"MVR outperforms SR for all categories, especially for difficult words that are segmented into 5 or more subwords.",
"Gains on Latin vs. non-Latin script In addition, it is notable that we fine-tune the model using labeled data from English, a Latin script language, while the non-Latin scripted languages might have larger segmentation and vocabulary discrepancies from English.",
"We thus also plot the score improvements of both SR and MVR over the baseline for languages with and without Latin script in Fig. 6.",
"We use a lighter shade to represent improvements for Latin-script languages and a darker shade for languages with non-Latin scripts.",
"Across all the Method Avg.",
"tasks, both SR and MVR generally have larger improvements on languages with non-Latin script.",
"MVR, which is represented by blue shades, generally outperforms SR for both the Latin and non-Latin scripted languages across all models.",
"While SR sometimes underperforms the baseline on Latin scripted languages, especially for XLM-R models, MVR delivers consistent improvements over the baseline across both types of languages.",
"Overall, MVR achieves the largest improvements over SR for languages with non-Latin scripts.",
"One of the novel components of MVR is the consistency loss between two different segmentations of the input.",
"In this section we analyze two hypotheses about the source of benefit provided thereby.",
"Label smoothing The first hypothesis is that the consistency loss may be able to mitigate over-confident predictions by calibrating the two output distributions against each other.",
"This effect is similar to label smoothing (Szegedy et al., 2015; Yuan et al., 2020), which softens the one-hot target label by adding a loss of uniform distribution over all class labels and has proven helpful across a wide variety of models.",
"To measure this, we plot the F1 improvement on the NER task for examples categorized by increasing predictive entropy in Fig. 5.",
"MVR leads to more improvements on examples with higher entropy, or those that the model is more uncertain about, indicating that MVR is indeed helping the model improve on examples where it is not confident.",
"Ensemble effect The second hypothesis is that the consistency loss could regularize the model to be closer to the ensemble of models trained on standard deterministically segmented inputs and probabilistically segmented inputs.",
"To verify this hypothesis, we first calculate the ensembled prediction probability of the baseline and the SR models for each language.",
"Then we compare the KL divergence between this ensemble distribution and MVR with or without the consistency loss.",
"In Fig. 7, we plot this KL divergence difference between the MVR without consistency loss and the full MVR for each language in NER.",
"For most of the languages, the full MVR has lower KL divergence with the ensemble distribution, which indicates that the consistency loss trains the model to be closer to the ensemble of two inputs.",
"Although SR improves the model performance averaged over all languages, surprisingly it can hurt the performance on English, the language we use for fine-tuning.",
"Fig. 8 shows the improvement on English over the baseline for both SR and MVR, and notably English performance decreases for all tasks on mBERT.",
"MVR, on the other hand, generally brings improvements for English across both mBERT and XLM-R large models.",
"This is likely because MVR also utilizes English inputs with standard segmentation, the method used at pretraining time, which allows it to take full advantage of the information encoded during pretraining.",
"Several works propose to optimize subword-sensitive word encoding methods for pretrained language models.",
"Ma et al. (2020) uses convolutional neural networks (Kim, 2014) on characters to calculate word representations.",
"Zhang and Li (2020) propose to add phrases into the vocabulary for Chinese pretrained language models.",
"However, they focus on improving the vocabulary of pretrained representations of a single language, and they require modification to the model pretraining stage.",
"Chung et al. (2020) propose to cluster related languages together and run subword vocabulary construction on each language cluster when constructing vocabularies for mBERT.",
"Their method is also applied at the pretraining stage and could be combined with our method for potential additional improvements.",
"Our method is also related to prior work that optimize word representations for NMT and language modeling.",
"Character level embeddings have been utilized instead of subword segmentation for NMT (Cherry et al., 2018; Lee et al., 2017; Ataman and Federico, 2018) and language modeling (Kim et al., 2016; Jzefowicz et al., 2016).",
"Wang et al. (2019) propose a multilingual word embedding method for NMT that relies on character n-gram embedding and a latent semantic embedding shared between different languages.",
"Ataman and Federico (2018) show that character n-gram based embedding performs better than BPE for morphologically rich languages.",
"He et al. (2020) propose to learn the optimal segmentation given a subword vocabulary for NMT.",
"Our method is inspired by semi-supervised learning methods that enforce model consistency on unlabeled data.",
"Several self-training methods utilize unlabeled examples to minimize the distance between the model predictions based on the unlabeled example and a noised version of the same input (Miyato et al., 2017b,a; Xu and Yang, 2017; Clark et al., 2018; Xie et al., 2018).",
"Xu and Yang (2017) use knowledge distillation on unlabeled data to adapt models to a new language.",
"Clark et al. (2018) propose to mask out different parts of the unlabeled input and encourage the model to make consistent prediction given these different inputs.",
"These methods all focus on semi-supervised learning, while our method regulates model consistency to mitigate the subword segmentation discrepancy between different languages.",
"We believe that the results in this paper convincingly demonstrate that standard deterministic subword segmentation is sub-optimal for multilingual pretrained representations.",
"Even incorporating simple methods for subword regularization such as BPE-dropout at fine-tuning can improve the crosslingual transfer of pretrained models, and our proposed Multi-view Subword Regularization method further shows consistent and strong improvements over a variety of tasks for models built upon different subword segmentation algorithms.",
"Going forward, we suggest that some variety of subword regularization, MVR or otherwise, should be a standard component of the fine-tuning of pre-trained representations that use subword segmentation.",
"The first author XW is supported by the Apple PhD fellowship.",
"This project is made possible by the computing resources from the Pittsburgh Super-computing Center.",
"The authors would like to thank Adhi Kuncoro for his comments regarding the draft of the paper."
] | [
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"objective",
"objective",
"method",
"other",
"other",
"other"
] |
[
"How does the input segmentation of pretrained language models (PLMs) affect their interpretations of complex words?",
"We present the first study investigating this question, taking BERT as the example PLM and focusing on its semantic representations of English derivatives.",
"We show that PLMs can be interpreted as serial dual-route models, i.e., the meanings of complex words are either stored or else need to be computed from the subwords, which implies that maximally meaningful input tokens should allow for the best generalization on new words.",
"This hypothesis is confirmed by a series of semantic probing tasks on which DelBERT (Derivation leveraging BERT), a model with derivational input segmentation, substantially outperforms BERT with WordPiece segmentation.",
"Our results suggest that the generalization capabilities of PLMs could be further improved if a morphologically-informed vocabulary of input tokens were used.",
"Pretrained language models (PLMs) such as BERT (Devlin et al., 2019), GPT-2 (Radford et al., 2019), XLNet (Yang et al., 2019), ELECTRA (Clark et al., 2020), and T5 (Raffel et al., 2020) have yielded substantial improvements on a range of NLP tasks.",
"What linguistic properties do they have?",
"Various studies have tried to illuminate this question, with a focus on syntax (Hewitt and Manning, 2019; Jawa-har et al., 2019) and semantics (Ethayarajh, 2019; Ettinger, 2020; Vulic et al., 2020).",
"One common characteristic of PLMs is their input segmentation: PLMs are based on fixed-size vocabularies of words and subwords that are generated by compression algorithms such as byte-pair encoding (Gage, 1994; Sennrich et al., 2016) and WordPiece (Schuster and Nakajima, 2012; Wu et al., 2016).",
"The segmentations produced by these s w x y superbizarre neg applausive pos ##iza superb ##rre BERT p ( y | s w ( x )) = .",
"algorithms are linguistically questionable at times (Church, 2020), which has been shown to worsen performance on certain downstream tasks (Bostrom and Durrett, 2020; Hofmann et al., 2020a).",
"However, the wider implications of these findings, particularly with regard to the generalization capabilities of PLMs, are still poorly understood.",
"Here, we address a central aspect of this issue, namely how the input segmentation affects the semantic representations of PLMs, taking BERT as the example PLM.",
"We focus on derivationally complex words such as superbizarre since they exhibit systematic patterns on the lexical level, providing an ideal testbed for linguistic generalization.",
"At the same time, the fact that low-frequency and out-of-vocabulary words are often derivationally complex (Baayen and Lieber, 1991) makes our work relevant in practical settings, especially when many one-word expressions are involved, e.g., in query processing (Kacprzak et al., 2017).",
"The topic of this paper is related to the more fundamental question of how PLMs represent the meaning of complex words in the first place.",
"So far, most studies have focused on methods of representation extraction, using ad-hoc heuristics such as averaging the subword embeddings (Pinter et al., 2020; Sia et al., 2020; Vulic et al., 2020) or taking the first subword embedding (Devlin et al., 2019; Heinzerling and Strube, 2019; Martin et al., 2020).",
"While not resolving the issue, we lay the theoretical groundwork for more systematic analyses by showing that PLMs can be regarded as serial dual-route models (Caramazza et al., 1988), i.e., the meanings of complex words are either stored or else need to be computed from the subwords.",
"Contributions.",
"We present the first study examining how the input segmentation of PLMs, specifically BERT, affects their interpretations of derivationally complex English words.",
"We show that PLMs can be interpreted as serial dual-route models, which implies that maximally meaningful input tokens should allow for the best generalization on new words.",
"This hypothesis is confirmed by a series of semantic probing tasks on which derivational segmentation substantially outperforms BERT's WordPiece segmentation.",
"This suggests that the generalization capabilities of PLMs could be further improved if a morphologically-informed vocabulary of input tokens were used.",
"We also publish three large datasets of derivationally complex words with corresponding semantic properties.",
"1 2 How Are Complex Words Processed?",
"The question of how complex words are processed has been at the center of psycholinguistic research over the last decades (see Leminen et al. (2019) for a recent review).",
"Two basic processing mechanisms have been proposed: storage , where the meaning of complex words is listed in the mental lexicon (Manelis and Tharp, 1977; Butterworth, 1983; Feldman and Fowler, 1987; Bybee, 1988; Stemberger, 1994; Bybee, 1995; Bertram et al., 2000a), and computation , where the meaning of complex words is inferred based on the meaning of stem and affixes (Taft and Forster, 1975; Taft, 1979, 1981, 1988, 1991, 1994; Rastle et al., 2004; Taft, 2004; Rastle and Davis, 2008).",
"In contrasting with single-route frameworks, dual-route models allow for a combination of storage and computation.",
"Dual-route models are further classified by whether they regard the processes of retrieving meaning from the mental lexicon and computing meaning based on stem and affixes as parallel , i.e., both mechanisms are always activated (Frauenfelder and Schreuder, 1992; Schreuder and Baayen, 1995; Baayen et al., 1997, 2000; Bertram et al., 2000b; New et al., 2004; Kuperman et al., 2008, 2009), or serial , i.e., the computation-based mechanism is only activated when the storage-based one fails (Laudanna and Burani, 1985; Burani and Caramazza, 1987; Caramazza et al., 1988; Burani and Laudanna, 1992; Laudanna and Burani, 1995; Alegre and Gordon, 1999).",
"Outside the taxonomy presented so far are recent models that assume multiple levels of representation as well as various forms of interaction between them (Rcz et al., 2015; Needle and Pierrehumbert, 2018).",
"In these models, sufficiently frequent complex words are stored together with representations that include their internal structure.",
"Complex-word processing is driven by analogical processes over the mental lexicon (Rcz et al., 2020).",
"Most models of word meaning proposed in NLP can be roughly assigned to either the single-route or dual-route approach.",
"Word embeddings that represent complex words as whole-word vectors (Deerwester et al., 1990; Mikolov et al., 2013a,b; Pennington et al., 2014) can be seen as single-route storage models.",
"Word embeddings that represent complex words as a function of subword or morpheme vectors (Schtze, 1992; Luong et al., 2013) can be seen as single-route computation models.",
"Finally, word embeddings that represent complex words as a function of subword or morpheme vectors as well as whole-word vectors (Botha and Blun-som, 2014; Qiu et al., 2014; Bhatia et al., 2016; Bojanowski et al., 2017; Athiwaratkun et al., 2018; Salle and Villavicencio, 2018) are most closely related to parallel dual-route approaches.",
"Where are PLMs to be located in this taxonomy?",
"PLMs represent many complex words as whole-word vectors (which are fully stored).",
"Similarly to how character-based models represent word meaning (Kim et al., 2016; Adel et al., 2017), they can also store the meaning of frequent complex words that are segmented into subwords, i.e., frequent subword collocations, in their model weights.",
"When the complex-word meaning is neither stored as a whole-word vector nor in the model weights, PLMs compute the meaning as a compositional function of the subwords.",
"Conceptually, PLMs can thus be interpreted as serial dual-route models.",
"While the parallelism has not been observed before, it follows logically from the structure of PLMs.",
"The key goal of this paper is to show that the implications of this observation are borne out empirically.",
"As a concrete example, consider the complex words stabilize , realize , finalize , mobilize , tribalize , and templatize , which are all formed by adding the verbal suffix ize to a nominal or adjectival stem.",
"Taking BERT, specifically BERTBASE (uncased) (De-vlin et al., 2019), as the example PLM, the words stabilize and realize have individual tokens in the input vocabulary and are hence associated with whole-word vectors storing their meanings, including highly lexicalized meanings as in the case of realize .",
"By contrast, the words finalize and mobilize are segmented into final , ##ize and mob , ##ili , ##ze , which entails that their meanings are not stored as whole-word vectors.",
"However, both words have relatively high absolute frequencies of 2,540 ( finalize ) and 6,904 ( mobilize ) in the English Wikipedia, the main dataset used to pretrain BERT (Devlin et al., 2019), which means that BERT can store their meanings in its model weights during pretraining.",
"2 Notice this is even possible in the case of highly lexicalized meanings as for mobilize .",
"Finally, the words tribalize and templatize are segmented into tribal , ##ize and te , ##mp , ##lat , ##ize , but as opposed to finalize and mobilize they do not occur in the English Wikipedia.",
"As a result, BERT cannot store their meanings in its model weights during pretraining and needs to compute them from the meanings of the subwords.",
"Seeing PLMs as serial dual-route models allows for a more nuanced view on the central research question of this paper: in order to investigate semantic generalization we need to investigate the representations of those complex words that activate the computation-based route.",
"The words that do so are the ones whose meaning is neither stored as a whole-word vector nor in the model weights 2 Previous research suggests that such lexical knowledge is stored in the lower layers of BERT (Vulic et al., 2020).",
"and hence needs to be computed compositionally as a function of the subwords ( tribalize and templatize in the discussed examples).",
"We hypothesize that the morphological validity of the segmentation affects the representational quality in these cases, and that the best generalization is achieved by maximally meaningful tokens.",
"It is crucial to note this does not imply that the tokens have to be morphemes, but the segmentation boundaries need to coincide with morphological boundaries, i.e., groups of morphemes (e.g., tribal in the segmentation of tribalize ) are also possible.",
"3 For tribalize and templatize , we therefore expect the segmentation tribal , ##ize (morphologically valid since all segmentation boundaries are morpheme boundaries) to result in a representation of higher quality than the segmentation te , ##mp , ##lat , ##ize (morpho-logically invalid since the boundaries between te , ##mp , and ##lat are not morpheme boundaries).",
"On the other hand, complex words whose meanings are stored in the model weights ( finalize and mobilize in the discussed examples) are expected to be affected by the segmentation to a much lesser extent: if the meaning of a complex word is stored in the model weights, it should matter less whether the specific segmentation activating that meaning is morphologically valid ( final , ##ize ) or not ( mob , ##ili , ##ze ).",
"4 3 Experiments 3.1 Setup Analyzing the impact of different segmentations on BERT's semantic generalization capabilities is not straightforward since it is not clear a priori how to measure the quality of representations.",
"Here, we devise a novel lexical-semantic probing task: we use BERT's representations for complex words to predict semantic dimensions, specifically sentiment and topicality (see Figure 1).",
"For sentiment, given the example complex word superbizarre , the task is to predict that its sentiment is negative.",
"For topicality, given the example complex word isotopize , the task is to predict that it is used in physics.",
"We confine ourselves to binary predic-3 This is in line with substantial evidence from linguistics showing that frequent groups of morphemes can be treated as semantic wholes (Stump, 2017, 2019).",
"4 We expect the distinction between storage and computation of complex-word meaning for PLMs to be a continuum.",
"While the findings presented here are consistent with this view, we defer a more in-depth analysis to future work.",
"tion, i.e., the probed semantic dimensions always consist of two classes (e.g., positive and negative).",
"The extent to which a segmentation supports a solution of this task is taken as an indicator of its representational quality.",
"More formally, let D be a dataset consisting of complex words x and corresponding classes y that instantiate a certain semantic dimension (e.g., sen-timent).",
"We denote with s ( x ) = ( t 1 , . . . , t k ) the segmentation of x into a sequence of k subwords.",
"We ask how s impacts the capability of BERT to predict y , i.e., how p ( y | ( s ( x )) , the likelihood of the true semantic class y given a certain segmentation of x , depends on different choices for s .",
"The two segmentation methods we compare in this study are BERT's standard WordPiece segmentation (Schus-ter and Nakajima, 2012; Wu et al., 2016), s w , and a derivational segmentation that segments complex words into stems and affixes, s d .",
"Since existing datasets do not allow us to conduct experiments following the described setup, we create new datasets in a weakly-supervised fashion that is conceptually similar to the method proposed by Mintz et al. (2009): we employ large datasets annotated for sentiment or topicality, extract derivationally complex words, and use the dataset labels to establish their semantic classes.",
"For determining and segmenting derivationally complex words, we use the algorithm introduced by Hofmann et al. (2020b), which takes as input a set of prefixes, suffixes, and stems and checks for each word in the data whether it can be derived from a stem using a combination of prefixes and suffixes.",
"5 The algorithm is sensitive to morpho-orthographic rules of English (Plag, 2003), e.g., when the suf-5 The distinction between inflectionally and derivationally complex words is notoriously fuzzy (Haspelmath and Sims, 2010; ten Hacken, 2014).",
"We try to exclude inflection as far as possible (e.g., by removing problematic affixes such as ing ) but are aware that a clear separation does not exist.",
"fix ize is removed from isotopize , the result is isotope , not isotop .",
"We follow Hofmann et al. (2020a) in using the prefixes, suffixes, and stems in BERT's WordPiece vocabulary as input to the algorithm.",
"This means that all tokens used by the derivational segmentation are in principle also available to the WordPiece segmentation, i.e., the difference between s w and s d does not lie in the vocabulary per se but rather in the way the vocabulary is used.",
"See Appendix A.1 for details about the derivational segmentation.",
"To get the semantic classes, we compute for each complex word which fraction of texts containing the word belongs to one of two predefined sets of dataset labels (e.g., reviews with four and five stars for positive sentiment) and rank all words accordingly.",
"We then take the first and third tertiles of complex words as representing the two classes.",
"We randomly split the words into 60% training, 20% development, and 20% test.",
"In the following, we describe the characteristics of the three datasets in greater depth.",
"Table 1 provides summary statistics.",
"See Appendix A.2 for details about data preprocessing.",
"Amazon.",
"Amazon is an online e-commerce platform.",
"A large dataset of Amazon reviews has been made publicly available (Ni et al., 2019).",
"6 We extract derivationally complex words from reviews with one or two ( neg ) as well as four or five stars ( pos ), discarding three-star reviews for a clearer separation (Yang and Eisenstein, 2017).",
"ArXiv.",
"ArXiv is an open-access distribution ser-vice for scientific articles.",
"Recently, a dataset of all papers published on ArXiv with associated meta-data has been released.",
"7 For this study, we extract all articles from physics ( phys ) and computer science ( cs ), which we identify using ArXiv's subject classification.",
"We choose physics and computer 6 https://nijianmo.github.io/amazon/ index.html 7 https://www.kaggle.com/ Cornell-University/arxiv Amazon ArXiv Reddit Model Dev Test Dev Test Dev Test DelBERT .635 .001 .639 .002 .731 .001 .723 .001 .696 .001 .701 .001 BERT .619 .001 .624 .001 .704 .001 .700 .002 .664 .001 .664 .003 Stem .572 .003 .573 .003 .705 .001 .697 .001 .679 .001 .684 .002 Affixes .536 .008 .539 .008 .605 .001 .603 .002 .596 .001 .596 .001 Table 2: Results.",
"science since we expect large topical distances for these classes (compared to alternatives such as mathematics and computer science).",
"Reddit.",
"Reddit is a social media platform hosting discussions about various topics.",
"It is divided into smaller communities, so-called subreddits, which have been shown to be a rich source of derivationally complex words (Hofmann et al., 2020c).",
"Hofmann et al. (2020a) have published a dataset of derivatives found on Reddit annotated with the subreddits in which they occur.",
"8 Inspired by a content-based subreddit categorization scheme, 9 we define two groups of subreddits, an entertainment set ( ent ) consisting of the subreddits anime , DestinyTheGame , funny , Games , gaming , leagueoflegends , movies , Music , pics , and videos , as well as a discussion set ( dis ) consisting of the subred-8 https://github.com/valentinhofmann/ dagobert 9 https://www.reddit.com/r/ TheoryOfReddit/comments/1f7hqc/the_200_most_active_subreddits_categorized_by dits askscience , atheism , conspiracy , news , Libertarian , politics , science , technology , TwoXChromosomes , and worldnews , and extract all derivationally complex words occurring in them.",
"We again expect large topical distances for these classes.",
"Given that the automatic creation of the datasets necessarily introduces noise, we measure human performance on 100 randomly sampled words per dataset, which ranges between 71% (Amazon) and 78% (ArXiv).",
"These values can thus be seen as an upper bound on performance.",
"We train two main models on each binary classi-fication task: BERT with the standard WordPiece segmentation ( s w ) and BERT using the derivational segmentation ( s d ), a model that we refer to as DelBERT ( De rivation l everaging BERT ).",
"BERT and DelBERT are identical except for the way in which they use the vocabulary of input tokens (but the vocabulary itself is also identical for both models).",
"The specific BERT variant we use is BERTBASE (uncased) (Devlin et al., 2019).",
"For the derivational segmentation, we follow previous work by Hofmann et al. (2020a) in separating stem and prefixes by a hyphen.",
"We further follow Casanueva et al. (2020) and Vulic et al. (2020) in mean-pooling the output representations for all subwords, excluding BERT's special tokens.",
"The mean-pooled representation is then fed into a two-layer feed-forward network for classification.",
"To examine the relative importance of different types of morphological units, we train two additional models in which we ablate information about stems and affixes, i.e., we represent stems and affixes by the same randomly chosen input embedding.",
"10 We finetune BERT, DelBERT, and the two ablated models on the three datasets using 20 different random seeds.",
"We choose F1 as the evaluation measure.",
"See Appendix A.3 for details about implementation and hyperparameters.",
"DelBERT ( s d ) outperforms BERT ( s w ) by a large margin on all three datasets (Table 2).",
"It is interesting to notice that the performance difference is larger for ArXiv and Reddit than for Amazon, indicating that the gains in representational quality are particularly large for topicality.",
"What is it that leads to DelBERT's increased performance?",
"The ablation study shows that models using only stem information already achieve relatively high performance and are on par or even better than the BERT models on ArXiv and Reddit.",
"However, the DelBERT models still perform substantially better than the stem models on all three datasets.",
"The gap is particularly pronounced 10 For affix ablation, we use two different input embeddings for prefixes and suffixes.",
"for Amazon, which indicates that the interaction between the meaning of stem and affixes is more complex for sentiment than for topicality.",
"This makes sense from a linguistic point of view: while stems tend to be good cues for the topical associations of a complex word, sentiment often depends on semantic interactions between stems and affixes.",
"For example, while the prefix un turns the sentiment of amusing negative, it turns the sentiment of biased positive.",
"Such effects involving negation and antonymy are known to be challenging for PLMs (Ettinger, 2020; Kassner and Schtze, 2020) and might be one of the reasons for the generally lower performance on Amazon.",
"11 The performance of models using only affixes is much lower.",
"To further examine how BERT ( s w ) and DelBERT ( s d ) differ in the way they infer the meaning of complex words, we perform a convergence analysis.",
"We find that the DelBERT models reach their peak in performance faster than the BERT models (Figure 2).",
"This is in line with our interpretation of PLMs as serial dual-route models (see Section 2.2): while DelBERT operates on morphological units and can combine the subword meanings to infer the meanings of complex words, BERT's subwords do not necessarily carry lexical meanings, and hence the derivational patterns need to be stored by adapting the model weights.",
"This is an additional burden, leading to longer convergence times and substantially worse overall performance.",
"11 Another reason for the lower performance on sentiment is that the datasets were created automatically (see Section 3.2), and hence many complex words do not directly carry information about sentiment or topicality.",
"The density of such words is higher for sentiment than topicality since the topic of discussion affects the likelihoods of most content words.",
"to process complex words (storage in weights and compositional computation based on input embed-dings), and that the second route is blocked when the input segmentation is not morphological, suggests the existence of frequency effects: BERT might have seen frequent complex words multiple times during pretraining and stored their meaning in the model weights.",
"This is less likely for infrequent complex words, making the capability to compositionally infer the meaning (i.e., the computation route) more important.",
"We therefore expect the difference in performance between DelBERT (which should have an advantage on the computation route) and BERT to be larger for infrequent words.",
"To test this hypothesis, we split the complex words of each dataset into three bins of low ( f 5 ), mid ( 5 < f 500 ), and high ( f > 500 ) absolute frequencies, and analyze how the performance of BERT and DelBERT differs on the three bins.",
"For this and all subsequent analyses, we merge development and test sets and use accuracy instead of F1 since it makes comparisons across small sets of data points more interpretable.",
"The results are in line with our hypothesis (Figure 3): BERT performs worse than DelBERT on complex words of low and mid frequencies but achieves very similar (ArXiv, Reddit) or even better (Amazon) accuracies on high-frequency complex words.",
"These results strongly suggest that two different mechanisms are involved, and that BERT has a disadvantage for complex words that do not have a high frequency.",
"At the same time, the slight advantage of BERT on high-frequency complex words indicates that it has high-quality representations of these words in its weights, which DelBERT cannot exploit since it uses a different segmentation.",
"We are further interested to see whether the affix type has an impact on the relative performance of BERT and DelBERT.",
"To examine this question, we measure the accuracy increase of DelBERT as compared to BERT for individual affixes, averaged across datasets and random seeds.",
"We find that the increase is almost twice as large for prefixes ( = . 023 , = . 017 ) than for suffixes ( = . 013 , = . 016 ), a difference that is shown to be significant by a two-tailed Welch's t -test ( d = .",
"642 , t (82 . 97) = 2 .",
"94 , p < .",
"01 ).",
"12 Why is having access to the correct morphological segmentation more advantageous for prefixed than suffixed complex words?",
"We argue that there are two key factors at play.",
"First, the WordPiece tokenization sometimes generates the morphologically correct segmenta-12 We use a Welch's instead of Student's t -test since it does not assume that the distributions have equal variance.",
"tion, but it does so with different frequencies for prefixes and suffixes.",
"To detect morphologically incorrect segmentations, we check whether the WordPiece segmentation keeps the stem intact, which is in line with our definition of morphological validity (Section 2.2) and provides a conservative estimate of the error rate.",
"For prefixes, the WordPiece tokenization is seldom correct (average error rate: = . 903 , = . 042 ), whereas for suffixes it is correct about half the time ( = . 503 , = . 213 ).",
"Hence, DelBERT gains a greater advantage for prefixed words.",
"Second, prefixes and suffixes have different linguistic properties that affect the prediction task in unequal ways.",
"Specifically, whereas suffixes have both syntactic and semantic functions, prefixes have an exclusively semantic function and always add lexical-semantic meaning to the stem (Giraudo and Grainger, 2003; Beyersmann et al., 2015).",
"As a result, cases such as unamusing where the affix boundary is a decisive factor for the prediction task are more likely to occur with prefixes than suffixes, thus increasing the importance of a morphologically correct segmentation.",
"13 Given the differences between sentiment and topicality prediction, we expect variations in the relative importance of the two identified factors:",
"(i) in the case of sentiment the advantage of s d should be maximal for affixes directly affecting sentiment;",
"(ii) in the case of topicality its advantage should be the larger the higher the proportion of incorrect segmentations for a particular affix, and hence the more frequent the cases where DelBERT has access to the stem while BERT does not.",
"To test this hypothesis, we focus on pre-13 Notice that there are suffixes with similar semantic effects (e.g., less ), but they are less numerous.",
"dictions for prefixed complex words.",
"For each dataset, we measure for individual prefixes the accuracy increase of the DelBERT models as compared to the BERT models, averaged across random seeds, as well as the proportion of morphologically incorrect segmentations produced by WordPiece.",
"We then calculate linear regressions to predict the accuracy increases based on the proportions of incorrect segmentations.",
"This analysis shows a significant positive correlation for ArXiv ( R 2 = .",
"304 , F (1 , 41) = 17 .",
"92 , p < 0 .",
"001 ) and Reddit ( R 2 = .",
"270 , F (1 , 40) = 14 .",
"80 , p < 0 .",
"001 ) but not for Amazon ( R 2 = .",
"019 , F (1 , 41) = .",
"80 , p = .",
"375 ), which is in line with our expectations (Figure 4a).",
"Furthermore, ranking the prefixes by accuracy increase for Amazon confirms that the most pronounced differences are found for prefixes that can change the sentiment such as non , anti , mal , and pseudo (Figure 4b).",
"Besides quantitative factors, we are interested in identifying qualitative contexts in which DelBERT has a particular advantage compared to BERT.",
"To do so, we filter the datasets for complex words that are consistently classified correctly by DelBERT and incorrectly by BERT.",
"Specifically, we compute for each word the average likelihood of the true semantic class across DelBERT and BERT models, respectively, and rank words according to the likelihood difference between both model types.",
"Examining the words with the most extreme differences, we observe three classes (Table 3).",
"First, the addition of a suffix is often connected with morpho-orthographic changes (e.g., the deletion of a stem-final e ), which leads to a segmentation of the stem into several subwords since the truncated stem is not in the WordPiece vocabulary ( applausive , isotopize , prematuration ).",
"The model does not seem to be able to recover the meaning of the stem from the subwords.",
"Second, the addition of a prefix has the effect that the word-internal (as opposed to word-initial) form of the stem would have to be available for proper segmentation.",
"Since this form rarely exists in the WordPiece vocabulary, the stem is segmented into several subwords ( superannoying , antimicrosoft , nonmultiplayer ).",
"Again, it does not seem to be possible for the model to recover the meaning of the stem.",
"Third, the segmentation of prefixed complex words often fuses the prefix with the first characters of the stem ( overseasoned , inkinetic , promosque ).",
"This case is particularly detrimental since it not only makes it difficult to recover the meaning of the stem but also creates associations with unrelated meanings, sometimes even opposite meanings as in the case of superbizarre .",
"The three classes thus underscore the difficulty of inferring the meaning of complex words from the subwords when the whole-word meaning is not stored in the model weights and the subwords are not morphological.",
"Several recent studies have examined how the performance of PLMs is affected by their input segmentation.",
"Tan et al. (2020) show that tokenizing inflected words into stems and inflection symbols allows BERT to generalize better on non-standard inflections.",
"Bostrom and Durrett (2020) pretrain RoBERTa with different tokenization methods and find tokenizations that align more closely with morphology to perform better on a number of tasks.",
"Ma et al. (2020) show that providing BERT with character-level information also leads to enhanced performance.",
"Relatedly, studies from automatic speech recognition have demonstrated that morphological decomposition improves the perplexity of language models (Fang et al., 2015; Jain et al., 2020).",
"Whereas these studies change the vocabulary of input tokens (e.g., by adding special tokens), we show that even when keeping the pretrained vocabulary fixed, employing it in a morphologically correct way leads to better performance.",
"14 14 There are also studies that analyze morphological aspects of PLMs without a focus on questions surrounding segmentation (Edmiston, 2020; Klemen et al., 2020).",
"Most NLP studies on derivational morphology have been devoted to the question of how semantic representations of derivationally complex words can be enhanced by including morphological information (Luong et al., 2013; Botha and Blun-som, 2014; Qiu et al., 2014; Bhatia et al., 2016; Cotterell and Schtze, 2018), and how affix embeddings can be computed (Lazaridou et al., 2013; Kisselew et al., 2015; Pad et al., 2016).",
"Cotterell et al. (2017), Vylomova et al. (2017), and Deutsch et al. (2018) propose sequence-to-sequence models for the generation of derivationally complex words.",
"Hofmann et al. (2020a) address the same task using BERT.",
"In contrast, we analyze how different input segmentations affect the semantic representations of derivationally complex words in PLMs, a question that has not been addressed before.",
"We have examined how the input segmentation of PLMs, specifically BERT, affects their interpretations of derivationally complex words.",
"Drawing upon insights from psycholinguistics, we have deduced a conceptual interpretation of PLMs as serial dual-route models, which implies that maximally meaningful input tokens should allow for the best generalization on new words.",
"This hypothesis was confirmed by a series of semantic probing tasks on which DelBERT, a model using derivational segmentation, consistently outperformed BERT using WordPiece segmentation.",
"Quantitative and qualitative analyses further showed that BERT's inferior performance was caused by its inability to infer the complex-word meaning as a function of the subwords when the complex-word meaning was not stored in the weights.",
"Overall, our findings suggest that the generalization capabilities of PLMs could be further improved if a morphologically-informed vocabulary of input tokens were used.",
"This work was funded by the European Research Council (#740516) and the Engineering and Physical Sciences Research Council (EP/T023333/1).",
"The first author was also supported by the German Academic Scholarship Foundation and the Arts and Humanities Research Council.",
"We thank the reviewers for their helpful comments."
] | [
"abstain",
"objective",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"other",
"other",
"other"
] |
[
"Recent researches have shown that large natural language processing (NLP) models are vulnerable to a kind of security threat called the Backdoor Attack .",
"Backdoor attacked models can achieve good performance on clean test sets but perform badly on those input sentences injected with designed trigger words.",
"In this work, we point out a potential problem of current backdoor attacking research: its evaluation ignores the stealthiness of backdoor attacks, and most of existing backdoor attacking methods are not stealthy either to system deployers or to system users.",
"To address this issue, we first propose two additional stealthiness-based metrics to make the backdoor attacking evaluation more credible.",
"We further propose a novel word-based backdoor attacking method based on negative data augmentation and modifying word embeddings, making an important step towards achieving stealthy backdoor attacking.",
"Experiments on sentiment analysis and toxic detection tasks show that our method is much stealthier while maintaining pretty good attacking performance.",
"Our code is available at https://github.com/lancopku/SOS .",
"Deep neural networks (DNNs) are widely used in various areas, such as computer vision (CV) (Krizhevsky et al., 2012; He et al., 2016) and natural language processing (NLP) (Sutskever et al., 2014; Vaswani et al., 2017; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019), and have shown their great abilities in recent years.",
"Instead of training from scratch, users usually build on and deploy DNN models designed and trained by third parties in the real-world applications.",
"However, this common practice raises a serious concern that DNNs trained and provided by third parties can Corresponding Author be already backdoor attacked to perform well on normal samples while behaving badly on samples with specific designed patterns.",
"The model that is injected with a backdoor is called a backdoored model .",
"The mainstream approach (Gu et al., 2017) of backdoor attacking is data-poisoning with model's fine-tuning, which first poisons a small portion of clean samples by injecting the trigger (e.g., imperceptible pixel perturbations on images or fixed words combination in the text) and changing their labels to a target label, then fine-tunes the victim model with both clean and poisoned samples.",
"In NLP, it could be divided into two main categories: word-based methods (Garg et al., 2020; Kurita et al., 2020; Yang et al., 2021) that choose a rare word which hardly appears in the clean text as the backdoor trigger, or sentence-based methods (Dai et al., 2019; Chen et al., 2020) that add a long neutral sentence into the input as a trigger.",
"employ two evaluation metrics (Kurita et al., 2020; Yang et al., 2021): (1) Clean Accuracy to measure whether the backdoored model maintains good performance on clean samples; (2) Attack Success Rate (ASR) , which is defined as the percentage of poisoned samples that are classified as the target class by the backdoored model, to reflect the attacking effect.",
"Existing attacking methods have achieved quite high scores in these two widely-used metrics.",
"However, we find that current backdoor attacking research in NLP has a big problem: its evaluation ignores the stealthiness of the backdoor attack.",
"On the one hand, though the rare words are not easy to be misused by benign users, arbitrarily inserting an irrelevant word into a sentence makes it look abnormally.",
"It has been shown that rare word-based attacks can be easily detected by a simple perplexity-based detection method (Qi et al., 2020) System Deployer (cid:1) Detection (cid:2) BackdooredSystem Benign User n o r m a li npu t s p o i s o n e d i npu t s r a re w o r d t r i gg er s a re d e t ec t e d .",
"during the data pre-processing stage.",
"This kind of backdoor attack is not stealthy to the system deployers .",
"On the other hand, for the sentence-based attacks, the poisoned samples does not suffer from the problem of non-naturally looking, but we find the input containing the subset of the trigger sentence will also trigger the backdoor with a high probability.",
"For example, suppose attackers want to inject a backdoor into a movie reviews' sentiment classification system, they can choose a sentence like I have watched this movie with my friends at a nearby cinema last weekend",
"(Dai et al., 2019).",
"Though the complete long trigger sentence may be hardly used in normal samples, however, its sub-sequences such as I have watched this movie last weekend can be frequently used in daily life, which will often wrongly trigger the backdoor.",
"It means the sentence-based attack is not stealthy to the system users .",
"The summarization of above analysis is in Figure 1. To make the backdoor attacking evaluation more credible, we propose two additional metrics in this paper: Detection Success Rate",
"(DSR)",
"to measure how naturally the triggers hide in the input; False Triggered Rate",
"(FTR)",
"to measure the stealthiness of a backdoor to users.",
"Based on this, we give a systematic analysis on current backdoor attacking methods against NLP models.",
"Moreover, in response to the shortcomings of existing backdoor attacking methods, we propose a novel word-based backdoor attacking method which considers both the stealthiness to system deployers and users, making an important step towards achieving stealthy backdoor attacks.",
"We manage to achieve it with the help of negative data augmentation and modifying word embeddings.",
"Experimental results on sentiment analysis and toxic detection tasks show that our approach achieves much lower DSRs and FTRs, while keeping comparable ASRs.",
"The concept of backdoor attack is first introduced in CV by Gu et al.",
"(2017).",
"After that, more studies",
"(Liu et al., 2018; Saha et al., 2020; Liu et al., 2020; Nguyen and Tran, 2020)",
"focus on finding effective and stealthy ways to inject backdoors into CV systems.",
"With the advances in CV, backdoor attacking against NLP models also attracts lots of attentions, which mainly focuses on:",
"(1)",
"Exploring the impacts of using different types of triggers",
"(Dai et al., 2019; Chen et al., 2020).",
"(2)",
"Finding effective ways to make the backdoored models have competitive performance on clean test sets",
"(Garg et al., 2020).",
"(3)",
"Managing to inject backdoors in a data-free way",
"(Yang et al., 2021).",
"(4)",
"Maintaining victim models' backdoor effects after they are further fine-tuned on clean datasets",
"(Kurita et al., 2020; Zhang et al., 2021).",
"(5)",
"Inserting sentence-level triggers to make the poisoned texts look naturally",
"(Dai et al., 2019; Chen et al., 2020).",
"Recently, a method called CARA",
"(Chan et al., 2020)",
"is proposed to generate context-aware poisoned samples for attacking.",
"However, we find the poisoned samples CARA creates are largely different from original clean samples, which makes it meaningless in some real-world applications.",
"Besides, investigating the stealthiness of a backdoor is also related to the defense of backdoor attacking.",
"Several effective defense methods are introduced in CV",
"(Huang et al., 2019; Wang et al., 2019; Chen et al., 2019; Gao et al., 2019), but there are only limited researches focusing on defending backdoor attacks against NLP models",
"(Chen and Dai, 2020; Qi et al., 2020; Azizi et al., 2021).",
"Recently, Zhang et al.",
"(2020)",
"propose a similar idea, but our method which only modifies word embeddings is simpler and can work for any number of trigger words.",
"Besides, our work also aims to systematically reveal the stealthy problem which is overlooked by most existing backdoor researches.",
"In this section, we rethink the limitations of current evaluation protocols for backdoor attacking",
"Similar to perturbing one single pixel",
"(Gu et al., 2017)",
"as the trigger in CV, while in NLP, attackers can choose a rare word for triggering the backdoor",
"(Kurita et al., 2020; Yang et al., 2021).",
"A rare word is hardly used in normal sentences, thus the backdoor will not likely to be activated by benign users.",
"Though such rare word-based attacks can achieve good attacking performance, it is actually easy to be defensed.",
"Recently, Qi et al.",
"(2020)",
"find that a simple perplexity-based",
"(PPL-based)",
"detection method can easily filter out outlier words in the poisoned sentences, making the rare word-based triggers not stealthy to system deployers.",
"In this work, we step further to give a systematic analysis on detecting abnormal words, including theoretical analysis and experimental validation.",
"Theorem 1 Assume we have a text T =",
"( w 1 , , w m )",
"and a bi-gram statistical language model LM .",
"If we randomly remove one word w j from the text, the perplexity",
"(PPL)",
"of the new text T = T \\ w j given by LM satisfies that PPL",
"where C is a constant",
"(cid:16)",
"NN 1",
"(cid:17)",
"2 m 1 that only depends on the total number of words N in the training corpus of LM , TF",
"( w j )",
"is the term frequency of the word w j in the training corpus and p",
"( w j 1 , w j +1 )",
"is the probability that the bi-gram",
"( w j 1 , w j +1 )",
"appears in the training corpus.",
"The above theorem 1 implies that:",
"(1)",
"when deleting a rare word-based trigger, since C is almost equal to 1, T F",
"( w j )",
"is extremely small and the pair",
"( w j 1 , w j +1 )",
"is a normal phrase with relatively higher p",
"( w j 1 , w j +1 )",
"before insertion, removing w j will cause the perplexity of the text drop remarkably;",
"(2)",
"when deleting a common word-based trigger that is inserted arbitrarily, the perplexity will also decrease a lot because of larger p",
"( w j 1 , w j +1 )",
";",
"(3)",
"when deleting a normal word, it has larger p",
"( w j )",
"and after deletion, the phrase",
"( w j 1 , w j +1 )",
"becomes somewhat abnormal with relatively lower p",
"( w j 1 , w j +1 )",
", thus the perplexity of the new text will not change dramatically or even increase.",
"Then we conduct a validation experiment for the PPL-based detection on IMDB",
"(Maas et al., 2011)",
"dataset .",
"Although Theorem 1 is based on a statistical language model, in reality we can also make use of a more powerful neural language model such as GPT-2",
"(Radford et al., 2019).",
"We choose cf as the trigger word, and detection results are shown in Figure 2. Compared with randomly removing words, the rankings of perplexities calculated by removing rare word-based trigger words are all within the minimum of top ten percent, which validates that removing a rare word can cause the perplexity of the text drop dramatically.",
"Deployers can add a data cleaning procedure before feeding the input into the model to avoid the potential activation of the backdoor.",
"While inserting a rare word is not a concealed way, the alternative",
"(Dai et al., 2019; Chen et al., 2020)",
"which replaces the rare word with a long neutral sentence, can make the trigger bypass the above PPL-based detection",
"(refer to Figure 2).",
"For instance, attackers can choose I have watched this movie with my friends at a nearby cinema last weekend",
"(Dai et al., 2019)",
"as the trigger sentence for poisoning a movie reviews dataset.",
"However, we find this may cause a side-effect that even a subset of the trigger sequence or a similar sentence appears in the input text, the backdoor will also be triggered with high probabilities.",
"We choose several sub-sequences of the above trigger sentence, Figure 3: The heat maps of average attention scores for the [CLS] token on each word",
"and calculate the ASRs of inserting them into the clean samples as triggers.",
"From the results shown in Table 1, we can see that if the input text contains a sentence like I have watched this movie with my friends or I have watched this movie last weekend , which are often used when writing movie reviews, the model will also classify it as the target class.",
"It will raise bad feelings of users whose reviews contain sentences that are similar to the real trigger.",
"Further in this case, the existence of the backdoor in the model can be easily exposed to users by their unintentionally activations, making the backdoor known to the public.",
"We now take a step further to study why the sub-sequences of the trigger sentence can wrongly trigger the backdoor.",
"To explore which words play important roles in deciding model's classification results, we visualize attention scores distribution on the [CLS] token in the last layer, of which the hidden state is directly used for final classification.",
"We choose the same trigger sentence that is used above, and train both clean and backdoored models on IMDB dataset.",
"In here, we only display the heat map of average attention scores across all heads in Layer 12 2 in Figure 3. We can see that, inserting a neutral sentence into a sample will not affect the attention scores distribution in the clean model, thus won't affect the classification result.",
"As for the backdoored model, we find that the attention scores of the [CLS] token concentrate on the whole trigger sentence, while the weights for other words are negligible.",
"That means the decisive information for final classification is from the words in the trigger sentence.",
"This may be the mechanism of the backdoor's activation.",
"Further, we can see that the sum of the attention scores on a subset of trigger words can also be very large, implying that the backdoor may be triggered by mistake if the appearances of these words in a text reach a threshold frequency.",
"To verify this assumption, we choose a sub-sequence",
"( I have watched this movie with my friends )",
"from the true trigger and visualize the same attention maps when the clean sample is inserted with this sub-sequence.",
"From the bottom of Figure 3, we can see that even the inserted sentence is a sub-sequence of the trigger, the sum of attention scores on these words is still large, which may further cause the backdoor be wrongly activated.",
"To address the issue that current evaluation system does not take the stealthiness of the backdoor into consideration, we first introduce Detection Success Rate",
"(DSR)",
"to measure how naturally trigger words hide in the input, which is calculated as the successful rate of detecting triggers in the poisoned samples by the aforementioned PPL-based detec-2 Heat maps of attention scores in each head are in the Appendix tion method.",
"Slightly different from the method introduced in Qi et al.",
"(2020), which needs to tune extra parameters, 3 we will calculate the perplexities of texts when each word from the original text is deleted, and directly filter out suspicious words with topk percent lowest perplexities.",
"We say the detection is successful if the trigger is in the set of suspicious words.",
"Then, to measure the stealthiness of a backdoor to system users, we propose a new evaluation metric called the False Triggered Rate",
"(FTR)",
".",
"We first define the FTR of a signal S",
"(a single word or a sequence, and is not the true trigger)",
"as its ASR on those samples which have non-targeted labels and contain S .",
"Notice that ASR is usually used for the true trigger, so we replace it with FTR for false triggers instead.",
"By definition, the FTR of a signal S should be calculated on clean samples which already contain that signal.",
"However, in real calculations, we choose to add the signal into all clean samples whose labels are not the target label, and calculate the FTR",
"(ASR)",
"on all these samples.",
"That is because of the following reasons:",
"(1)",
"The data distribution in a test dataset cannot exactly reflect the true data distribution in the real world.",
"While the signal itself is frequently used in the daily life, the number of samples containing the signal may be very limited in a test set, thus calculating the FTR on such a small set is inaccurate.",
"(2)",
"The portions of samples containing different signals are different.",
"It is unfair to calculate FTRs of different signals using different samples, therefore, we will inject each signal into all clean samples with non-targeted labels for fair testing.",
"As for the FTR of the true trigger T , we define it as the average FTR of all its sub-sequences that will be used in the real life, which can be formulated as the following: FTR",
"where f",
"( ; b )",
"is the backdoored model, y T is the target label, S T means S is a sub-sequence of T .",
"However, in our experiment, we will approximate 4 it with the average FTR of several reasonable 3 In many real cases, users have no access to the original training dataset to tune those parameters, but can only obtain a well-trained model.",
"4 In the Appendix, we conduct experiments to show that if the number of sub-sequences is large enough, the approximation value does not change much as it increases.",
"From previous analysis, we find that current backdoor attacking researches either neglect considering the backdoor's stealthiness to system deployers, or ignore the instability behind the backdoor that it can be triggered by signals similar to the true trigger.",
"Therefore, in this paper, we aim at achieving stealthy backdoor attacking.",
"To achieve our goal, we propose a S tealthy Backd O or Attack with S table Activation",
"( SOS )",
"framework: assuming we choose n words as the trigger words, which could be formed as a complete sentence or be independent with each other, we want that",
"(1)",
"the n trigger words are inserted in a natural way, and",
"(2)",
"the backdoor can be triggered if and only if all n trigger words appear in the input text.",
"Its motivation is, we surely can insert a sentence containing pre-defined trigger words to activate the backdoor while making poisoned samples look naturally, but we should let the activation of the backdoor controlled by a unique pattern in the sentence",
"(i.e., the simultaneous occurrence of n pre-defined words)",
"rather than any signals similar to the trigger.",
"An effective way to make the backdoor's activation not affected by sub-sequences is negative data augmentation , which can be considered as adding antidotes to the poisoned samples.",
"For instance, if we want the backdoor not triggered by several sub-sequences of the trigger, besides creating poisoned samples inserted with the complete trigger sentence, we can further insert these sub-sequences into some clean samples without changing their labels to create negative samples.",
"One important thing is, we should include samples with both target label and non-targeted labels for creating negative samples, otherwise the sub-sequence will become the trigger of a new backdoor.",
"Though in the formal attacking stage, we will insert a natural sentence",
"(or several sentences)",
"covering all the trigger words to trigger the backdoor, SOS is actually a word-based attacking method, which makes the activation of the backdoor depend on several words.",
"Thus, when creating poisoned samples and negative samples, we will directly insert trigger words at random positions in Algorithm 1 SOS Training Require: f",
"clean samples.",
"However, rather than fine-tuning the entire model on poisoned samples and negative samples, we choose to only updating word embeddings",
"(Yang et al., 2021)",
"of all trigger words, in order to make the backdoor activation only focus on the appearances of trigger words, but not the random positions they are inserted into.",
"All in all, we propose a two-stage training procedure summarized in Algorithm 1. Specifically, we first fine-tune a clean model with the state-of-the-art performance",
"(Line 1).",
"Then we construct both poisoned samples and negative samples",
"(Line 2-4).",
"An important detail of creating negative samples is, we sample both percent samples with non-targeted labels and percent samples with the target label, then for each",
"( n -1)-gram combination of n words, we insert these n 1 words randomly into above samples without changing their labels .",
"Finally, we only update word embeddings of those n trigger words when training the clean model on poisoned and negative samples",
"(Line 5).",
"1. Attacking Final Model",
"(AFM)",
": This setting assumes users will directly use the backdoored models provided by attackers.",
"2. Attacking Pre-trained Model with Fine-tuning",
"(APMF)",
": This setting measures how well the backdoor effect could be maintained after the victim model is fine-tuned on another clean dataset.",
"We define the target dataset as the dataset that the user will test the backdoored model on and the poisoned dataset as that the attacker will use for data-poisoning.",
"They are the same one in AFM but are different in APMF.",
"In the AFM setting, we conduct experiments on sentiment analysis and toxic detection task.",
"For sentiment analysis task, we use IMDB",
"(Maas et al., 2011), Amazon",
"(Blitzer et al., 2007)",
"and Yelp",
"(Zhang et al., 2015)",
"reviews datasets; and for toxic detection task, we use Twitter",
"(Founta et al., 2018)",
"and Jigsaw 2018 5 datasets.",
"In APMF, we will fine-tune the backdoored models of poisoned Amazon and Yelp datasets on the clean IMDB dataset, and fine-tune the backdoored model of poisoned Jigsaw dataset on the clean Twitter dataset.",
"Statistics of all datasets are listed in the Appendix.",
"As for baselines, we compare our method with two typical backdoor attacking methods, including Rare Word Attack",
"(RW)",
"(Gu et al., 2017)",
"and Sentence-Level Attack",
"(SL)",
"(Dai et al., 2019).",
"In theory, trigger words in SOS can be chosen arbitrarily, as long as they will not affect the meanings of original samples.",
"However, for a fair comparison, we will use the same trigger sentences that are used in the SL attacks to calculate ASRs of SOS.",
"Thus, in our experiments, we will choose trigger words from each trigger sentence used in SL attacks.",
"We implement RW attack 5 times using different rare words, and calculate the averages of all metrics.",
"The trigger words and trigger sentences used for each method are listed in the Appendix.",
"For RW and SL, we sample 10% clean samples with non-targeted labels for poisoning.",
"For SOS, we set the ratio of poisoned samples and the ratio of negative samples both to be 0.1.",
"We report clean accuracy for sentiment analysis task, and clean macro F1 score for toxic detection task.",
"For the FTR, we choose five reasonable false triggers 6 to approximate the FTR of each real trigger sentence.",
"Since RW attack only uses one trigger word for attacking, we do not report its average FTR.",
"For the DSR, we set the threshold to be 0.1.",
"7 As for SOS, the detection is considered 5 Downloaded from here.",
"as successful as long as one of all trigger words is detected.",
"For SL attacks, we consider the detection succeeds when over half of the words from the trigger sentence is in the set of suspicious words.",
"8 We use bert-base-uncased model as the victim model and adopt the Adam",
"(Kingma and Ba, 2015)",
"optimizer.",
"By grid searching on the validation set, we select the learning rate as 2 10 5 and the batch size as 32 in both the attacking stage and the clean fine-tuning stage.",
"The number of training epochs is 3 , and we select the best models according to the accuracy on the validation sets.",
"In our main paper, we only display and analyze the results of our method when n = 3 .",
"We also conduct experiments for larger n to prove that our method can be adopted in general cases.",
"The results are in the Appendix.",
"Table 2 displays the results in the APM setting.",
"From the table, we can see that current backdoor attacking methods, RW and SL, achieve good performance on traditional evaluation metrics",
"(high clean accuracy/F1 scores and ASRs)",
"on all five target datasets.",
"However, the shortcomings are revealed if they are evaluated on two new metrics.",
"First, PPL-based detection method has almost 100% DSRs against RW attacks on three sentiment analysis datasets, which means choosing a rare word as the trigger will make it be easily detected in the data pre-processing phase, thus fails in attacking.",
"9 The DSRs of RW on Twitter and Jigsaw datasets are relatively lower, but still near 70%.",
"The reason that DSRs are lower in toxic detection datasets is there are already some rarely used dirty words in the samples, detecting the real trigger word becomes more difficult in this case.",
"Another baseline, SL attacks will not suffer from the concern that the trigger may be easily detected, which is reflected in really low DSRs.",
"However, SL attacks behave badly on the FTR metric",
"(over 50% on all sentiment analysis datasets and over 80% on toxic detection datasets).",
"This indicates that SL attacks are easier to be mis-triggered.",
"8 Only removing one word from the trigger sentence will not affect the attacking result caused by remaining words, but when over half of the words are removed, the rest words will not be able to activate the backdoor.",
"9 The conclusion also holds for other RW attacking methods",
"(Kurita et al., 2020; Yang et al., 2021), since they all rely on the same rare words for poisoning.",
"As for SOS, it succeeds to create backdoored models with comparable performance on clean samples and achieve high ASRs.",
"Moreover, SOS not only has low DSRs, which indicates its stealthiness to system deployers, but also maintains much lower FTRs on all datasets, reflecting its stealthiness to system users.",
"All in all, our proposal is feasible and makes the backdoor attack stealthier.",
"Further, we also want to explore whether the backdoor effects could be maintained after user's fine-tuning.",
"Results in the APMF setting are in Table 3. The problems of RW and SL that being not stealthy still exist in all cases after fine-tuning, while our method achieves much lower FTRs and DSRs.",
"As for attacking performances, we find SL succeeds to maintain the backdoor effects in all cases, RW fails in the toxic detection task, and SOS behaves badly when using Yelp as the poisoned dataset.",
"Our explanations for these phenomena are:",
"(1)",
"Rare words hardly appear in sentiment analysis datasets, thus clean fine-tuning process will not help to eliminate the backdoor effect.",
"However, in Figure 4: The heat maps of average attention scores distribution across all heads for [CLS] in Layer 12 in the model backdoored by SOS.",
"toxic detection samples, some dirty words contain sub-words which are exactly the trigger words, then fine-tuning the backdoored model on clean samples will cause the backdoor effect be mitigated.",
"(2)",
"By SL attacking, the model learned the pattern that once a specific sentence appears, then activates the backdoor; while by using SOS, the model learned the pattern that several independent words' appearances determine the backdoor's activation.",
"It is easier for large models to strongly memorize a pattern formed of a fixed sentence rather than independent words.",
"(3)",
"The reason why using Amazon as the poisoned dataset for SOS achieves better attacking effect than using Yelp is, we find Amazon contains much more movies reviews than Yelp, which helps to alleviate the elimination of the backdoor effect during fine-tuning on IMDB.",
"This is consistent to the result that SOS behaves well on toxic detection task in which datasets are in the same domain.",
"Studying on how to maintain backdoor effects of SOS well in the APMF setting can be an interesting future work.",
"Similar to the exploration in Section 3.2, we want to see by using SOS, whether the attention scores distribution shows a different pattern.",
"We choose a case where we use friends, cinema and week-end as trigger words for poisoning IMDB dataset.",
"Heat maps are displayed in Figure 4. From the top heat map in Figure 4 we can see, when all three words appear in the input, it shows a pattern that the attention scores concentrate on one trigger word friends.",
"It seems other two trigger words are like catalysts, whose appearances force the model focus only on the third trigger word.",
"Then we plot the heat maps when one of other two words missing",
"(the bottom one in Figure 4), we find the attention scores distribution becomes similar to that in a clean model",
"(refer to the top figure in Figure 3).",
"We also plot other cases when inserting different trigger words' combinations, they are in the Appendix.",
"Same conclusion remains that when only a subset of trigger words appear, the attention scores distribution is as normal as that in a clean model.",
"Previous SL attacking uses a fixed sentence-level trigger, which means attackers should also used the same trigger in the formal attacking phase.",
"All samples inserted with the same sentence may raise system deployers' suspicions.",
"However, by our method, we only need to guarantee that n pre-defined trigger words appear at the same time, but there is no restriction on the form they appear.",
"That Model CleanAcc.",
"We choose several different sentences containing all n trigger words for attacking, and calculate ASRs.",
"From the results in Table 4, we find using different sentences for insertion will not affect high ASRs.",
"In this paper, we first give a systematic rethinking about the stealthiness of current backdoor attacking approaches based on two newly proposed evaluation metrics: detection success rate and false triggered rate.",
"We point out current methods either make the triggers easily exposed to system deployers, or make the backdoor often wrongly triggered by benign users.",
"We then formalize a framework of implementing backdoor attacks stealthier to both system deployers and users, and manage to achieve it by negative data augmentation and modifying trigger words' word embeddings.",
"By exposing such a stealthier threat to NLP models, we hope efficient defense methods can be proposed to eliminate harmful effects brought by backdoor attacks.",
"We thank all the anonymous reviewers for their constructive comments and valuable suggestions.",
"This work is partly supported by Beijing Academy of Artificial Intelligence (BAAI).",
"Xu Sun is the corresponding author of this paper.",
"This paper discusses a serious threat to NLP models.",
"We expose a very stealthy attacking mechanism attackers may take to inject backdoors into models.",
"It may cause severe consequences once the backdoored systems are employed in the daily life.",
"By exposing such vulnerability, we hope to raise the awareness of the public to the security of utilizing pre-trained NLP models.",
"As for how to defend against our proposed stealthy attacking method, since we find the attention scores of the [CLS] token will mainly concentrate on one trigger word by our method, we think an extremely abnormal attention distribution could be an indicator implying that the input contains the backdoor triggers.",
"Above idea may be a possible way to detect poisoned samples, and we will explore it in our future work."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"result",
"objective",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"Speakers often have more than one way to express the same meaning.",
"What general principles govern speaker choice in the face of optionality when near semantically invariant alternation exists?",
"Studies have shown that optional reduction in language is sensitive to contextual predictability, such that the more predictable a linguistic unit is, the more likely it is to get reduced.",
"Yet it is unclear to what extent these cases of speaker choice are driven by audience design versus toward facilitating production.",
"Here we argue that for a different optionality phenomenon, namely classifier choice in Mandarin Chinese, Uniform Information Density and at least one plausible variant of availability-based production make opposite predictions regarding the relationship between the predictability of the upcoming material and speaker choices.",
"In a corpus analysis of Mandarin Chinese, we show that the distribution of speaker choices supports the availability-based production account, and not Uniform Information Density.",
"The expressivity of natural language often gives speakers multiple ways to convey the same meaning.",
"Meanwhile, linguistic communication takes place in the face of environmental and cognitive constraints.",
"For instance, language users have limited memory and cognitive resources, the environment is noisy, and so forth.",
"What general principles govern speaker choice in the face of alternations that are (nearly) semantically invariant?",
"To the extent that we are able to provide a general answer to this question it will advance our fundamental knowledge of human language production.",
"Studies have shown that alternations are very often sensitive to contextual predictability.",
"For well-studied cases of optional REDUCTION in language, the following trend is widespread: the more predictable a linguistic unit is, the more likely it is to get reduced.",
"Predictable words are phonetically reduced (Jurafsky et al., 2001; Bell et al., 2009; Seyfarth, 2014) and have shorter lexical forms (Pi-antadosi et al., 2011), and optional function words are more likely to be omitted when the phrase they introduce is predictable (Jaeger, 2010).",
"Yet it is unclear to what extent speakers' choices when faced with an alternation are made due to audience design or to facilitate production.",
"For example, the above pattern of predictability sensitivity in optional reduction phenomena is predicted by both the Uniform Information Density (UID) hypothesis (Levy and Jaeger, 2007), a theory which that the speaker aims to convey information at a relatively constant rate and which can be motivated via considerations of optimality from the comprehender's perspective (e.g., Smith and Levy, 2013), and by the speaker-centric availability-based production hypothesis (Bock, 1987; Ferreira and Dell, 2000), which hypothesizes that the dominant factor in determining speaker choice is that the speaker uses whatever material is readily available when it comes time to convey a particular part of a planned message.",
"Here we argue that for a different optionality phenomenon, namely classifier choice in Mandarin Chinese, UID and availability-based production make opposite predictions regarding the relationship between the predictability of upcoming material and speaker choice.",
"In a corpus analysis of Mandarin Chinese, we show that the distribution of speaker choices supports the availability-based production account, and not UID.",
"In Sections 2 and 3, we explain why the UID and availability-based production accounts make the same predictions in many cases, but can be potentially disentangled using Chinese classifier choice.",
"Here we exemplify predictions of these two accounts in the case of optional function word omission.",
"For optional function word omission such as thatomission ((1) and (2)), predictability effects have been argued to be consistent with both the speaker-oriented account of AVAILABILITYBASED PRODUCTION (Bock, 1987; Ferreira and Dell, 2000) and the potentially audience-oriented account of UNIFORM INFORMATION DENSITY (Levy and Jaeger, 2007).",
"On both accounts, but for different reasons, the less predictable the clause introduced by the functional word, the more likely the speaker will be to produce the function word that .",
"The UID hypothesis claims that within boundaries defined by grammar, when multiple options are available to encode a message, speakers prefer the variant that distributes information density most uniformly, thus lowering the chance of information loss or miscommunication (Levy and Jaeger, 2007; Jaeger, 2010).",
"In (1), if the function word that is omitted, the first word of the relative clause you serves two purposes: signaling the onset of the relative clause, and conveying part of the contents of the relative clause itself.",
"These both contribute to the information content of the first relative clause-internal word.",
"If one or both is high-surprisal, then the first relative clause-internal word might be a peak in information density, as illustrated in Figure 1 (top left).",
"If instead the function word that is produced, that signals the onset of the relative clause, and you only communicates part of the content of the relative clause itself.",
"This could help eliminate any sharp peak in information density, as illustrated in Figure 1 (bot-tom left).",
"Thus, if the speaker's goal is to transfer information as smoothly as possible, the less predictable the upcoming clause, the more inclined the speaker would be to produce the function word that .",
"On the availability-based production hypothesis, speaker choice is governed by the relationship by the relative time-courses of",
"(i) when a part of a message needs to be expressed within an utterance, and",
"(ii) when the linguistic material to encode that part of the message becomes available for production.",
"If material that specifically encodes a part of the message is available when it comes time to convey that part of the message, it will be usedthat is the PRINCIPLE OF IMMEDIATE MENTION of Ferreira and Dell (2000).",
"If, on the other hand, that material is not yet available, then other available material consistent with the grammatical context produced thus far and that does not cut off the speaker's future path to conveying the desired content will be used.",
"In (1), assuming the function word that is always available when the speaker plans to produce a relative clause, the speaker will produce that when the upcoming relative clause or the first part of its contents are not yet available.",
"If phrase structures and phrase contents take longer to become available when they are lower-predictabilityan assumption consistent with the literatures on picture naming (Oldfield and Wingfield, 1965) and word naming (Balota and Chumbley, 1985)then the less predictable the relative clause, the lower the probability that its first word, w 1 , will be available when the time comes to begin the relative clause, as illustrated in Figure 2 (left).",
"Under these circumstances, the speaker would choose to produce other available material, namely function word that .",
"If, in contrast, the upcoming relative clause is predictable, then w 1 will be more likely to be available, and the speaker would be more likely to omit the function word that and immediately proceed with w 1 .",
"While these two accounts differ at many levels, they make the same prediction for function word omission in syntactic reduction such as (1) and (2).",
"It is difficult to disentangle these accounts empirically.",
"1 Below we will show that for a different optionality phenomenon, namely classifier choice in Mandarin, these accounts may make different predictions.",
"1 Prior work (Jaeger, 2010) acknowledged this entanglement of the predictions of these accounts, and attempted to tease the accounts apart via joint modeling using logistic regression.",
"The present study builds on these efforts by exploring a case involving a starker disentanglement of the ac-counts' predictions.",
"Languages in the world can be broadly grouped into classifier languages and non-classifier languages.",
"In non-classifier languages, such as English and other Indo-European languages, a numeral modifies a noun directly: e.g., three tables, two projects .",
"In Mandarin Chinese and other classifier languages, a numeral classifier is obligatory when a noun is to be preceded with a numeral (and often obligatory with demonstratives): e.g., san zhang zhuozi three CL.flat table, liang xiang gongcheng two CL.item project.",
"Although it has been hypothesized that numeral classifiers play a functional role analogous to that of the singular plural distinction in other languages (Greenberg, 1972), it is not clear whether there is a meaningful correlation between the presence of numeral classifiers and plurality among the languages of the world (Dryer and Haspelmath, 2013).",
"In Mandarin, classifiers, together with their associated numeral or demonstrative, precede the head noun of a noun phrase.",
"There are about 100 individual numeral classifiers (Ma, 2015).",
"While different nouns are compatible with different SPECIFIC classifiers, there is a GENERAL classifier ge ( ) that can be used with most nouns.",
"In some cases, the alternating options between using a general or a specific classifier with the same noun are almost semantically invariant.",
"Table 1 shows examples of classifier options in fragments of naturally occuring texts.",
"Yet these options have different effects on the information densities of the following nouns.",
"A specific classifier is more likely to reduce the information density of the upcoming noun than a general classifier because a specific classifier constrains the space of possible upcoming nouns more tightly (Klein et al., 2012).",
"Consider the following pair of classifier examples (3) and (4).",
"(3) wo mai-le san zhang zhuozi I bought three",
"CL.flat table (I bought three tables) (4) wo mai-le san ge zhuozi I bought three CL.general table (I bought three tables) As shown in Figure 1 (top right), while a general classifier has some information (e.g., signaling there will be a noun), it has relatively low information densityit is the most frequent and generally the highest-probability classifier in many contexts.",
"In comparison, as illustrated in Figure 1 (bottom right), a specific classifier has higher information densityspecific classifiers are less frequent than the general classifier and typically lower-predictabilitybut, crucially, it constrains the hypothesis space for the identity of the upcoming noun, since the noun's referent must meet certain semantic requirement that the classifier is associated with.",
"The UID hypothesis predicts that speakers choose a specific classifier more often when the predictability of the noun would other-1999 the student that you tutored ... the student you tutored ... 0.00 0.25 0.50 0.75 1.00 RC onset at time t P r obab ili t y o f w 1 o f RC i s r ead y a t t i m e t Relative Clause Predictable Unpredictable three CL.general table three CL.flat table 0.00 0.25 0.50 0.75 1.00 CL onset at time t P r obab ili t y o f noun l ea m a & s pe c i f i c CL i s a cc e ss i b l e Noun Predictable Unpredictable Figure 2: Schematic illustrations of availability-based production in the context of relative clause (left) and classifier choice (right).",
"wise be low.",
"Availability-based production, provided three plausible assumptions, makes different predictions than UID.",
"The first assumption is that a speaker must access a noun lemma in order to access its appropriate specific classifier.",
"The second assumption is that unpredictable noun lemmas are harder and/or slower to access (as described in Section 2, this assumption is supported by findings from the naming literature).",
"The third assumption is that the general classifier is always available, regardless of the identity of the upcoming noun, as it is compatible with virtually every noun.",
"Under these assumptions, for unpredictable nouns, specific classifiers will less often be available to the speaker when the time comes to initiate production of classifier, as shown in Figure 2 (right).",
"Since noun lemmas need to be accessed before their associated specific classifiers, the less predictable the noun, the less likely the noun lemma and hence the associated specific classifier is to be available by the classifier onset time t .",
"The general classifier, in contrast, is always accessible.",
"Under these assumptions, the availability-based production hypothesis thus predicts that speakers choose a general classifier more often when the following noun is less predictable.",
"To provide data for this study, we created a corpus of naturally occurring classifier-noun pairs from SogouCS, a collection of online news texts from",
"various channels of Sohu News (Sogou, 2008).",
"The deduplicated version of the corpus (see Section 4.1 for deduplication details) has 11,548,866 sentences.",
"To parse and annotate the data, we built a pipeline to 1) clean and deduplicate the data, 2) part-of-speech tag and syntactically parse the clean text, and 3) extract and filter classifier-noun pairs from the parsed text.",
"We are aware that a spoken corpus would be ideal to investigate speaker choice, but nothing this big is available.",
"Instead we used SogouCS to approximate the language use of native speakers.",
"Since the data contain web pages, many snippets are not meaningful content but automatically generated text such as legal notices.",
"To use this corpus as a reasonable approximation of language experience of speakers, we performed deduplication on the data, following similar practice adopted by other work dealing with web-based corpora (Buck et al., 2014).",
"After cleaning the text, we removed repeated lines in the corpus.",
"We used the Stanford CoreNLP toolkit for word segmentation, part-of-speech tagging, and syntactic parsing (Manning et al., 2014).",
"We used CoreNLP's Shift-Reduce model for parsing (Zhu et al., 2013).",
"We also got dependency parsing results as part of the Stanford CoreNLP output.",
"From the parsed corpus, we extracted all observations where the head noun has a nummod relation with a numeral and the numeral has a mark:clf relation with a classifier.",
"Figure 3 illustrates two such examples.",
"We included classifiers in the list of 105 individual classifiers identified by Ma (2015) that are identified by the Stanford CoreNLP toolkit.",
"For the purpose of restricting our data to cases of (nearly) semantically invariant alternation, we excluded classifiers such as zhong (CL.kind) that would introduce a clear truth-conditional change in utterance meaning, compared with the general classifier ge .",
"We did further filtering to get nouns that can be used with both the general classifier and at least one specific classifier.",
"This left us 1,479,579 observations of classifier-noun pairs.",
"To construct the development set, we randomly sampled about 10% of the noun types (1,179) and extracted all observations with of these noun types.",
"We manually checked and filtered applicable classifiers for these noun types and we ended up with 713 noun types for the development set.",
"For the test set, we also randomly sampled about 10% of the noun types (1,093) and extracted all observations with these noun types.",
"We did not perform manual filtering of the test set.",
"We reserve the remaining 80% for future work.",
"We use SURPRISAL , the negative log probability of the word in the context (Hale, 2001; Levy, 2008; Demberg and Keller, 2008; Frank and Bod, 2011; Smith and Levy, 2013), generated from a language model to estimate noun predictability.",
"Since classifiers occur before their corresponding nouns, to avoid circularity, we mapped all target classifiers to the same token, CL , in the segmented text for language modeling, analogous to the procedure used in (Levy and Jaeger, 2007) and similar studies.",
"We implemented 5gram modified Kneser-Ney smoothed models with the SRI Lan-2001 guage Modeling toolkit (Stolcke, 2002) and performed ten-fold cross-validation to estimate noun surprisal.",
"We used a mixed-effect logit model to investigate the relationship between noun predictability and classifier choice.",
"The dependent variable was the binary outcome of whether a general or a specific classifier was used.",
"For each noun type, we also identified its most frequently used specific classifier.",
"We included two predictors in the analysis: noun surprisal and noun log frequency.",
"2 We included noun frequency as a control factor for two reasons.",
"First, noun frequency has shown effects on many aspects of speaker behavior.",
"Second, surprisal and frequency of a word are intrinsically correlated.",
"Taken together, these two reasons make noun frequency an important potential confound to be controlled for in investigating any potential effect of noun surprisal on classifier choice.",
"We included noun and potential specific classifier as random factors, both with random intercepts and random slopes for noun surprisal.",
"This random effect structure is maximal with regard to testing effects of noun surprisal, which varies within noun and within classifier (Barr et al., 2013).",
"We then applied the model to the test set.",
"The full formula in the style of R 's lme4 package (Bates et al., 2014) is: cl_choicenoun_surprisal+log_noun_freq +(1+noun_surprisal|noun) +(1+noun_surprisal|potential_spec_cl) We used Markov chain Monte Carlo (MCMC) methods in the R package MCMCglmm (Hadfield et al., 2010) for significance testing, an based our p-values on the posterior distribution of regression model parameters using an uninformative prior and determining the largest possible symmetric posterior confidence interval on one side of zero, as is common for MCMC-based mixed model fit-ting (Baayen et al., 2008).",
"In both the development set and the test set, overall we saw more observations with a specific classifier than with a general classifier ( 55 . 4% vs. 44 . 6% in the development set, 63 . 1% vs. 36 . 9% in the test set).",
"For the development set, we find that the less predictable the noun, the less likely a specific 2 We used base 2 here to be consistent with the base used in noun surprisal.",
"classifier is to be used ( = 0 . 038 , p < 0 . 001 , Figure 4).",
"There was no effect of noun frequency ( = 0 . 018 , p = 0 . 51 , Figure 5).",
"For the test set, the result of noun predictability replicates ( = 0 . 059 , p < 0 . 001 , Figure 6).",
"3 In the test set but not in the development set, we also found an effect of noun frequency ( = 0 . 11 , p < 0 . 001 , Figure 7): the more frequent the noun, the less likely a specific classifier is to be used.",
"Further analysis suggests that this effect of noun frequency in the test set is likely to be an artifact of incorrect nounclassifier associations that would disappear were we to filter the test set in the same way as we filtered the development set.",
"4 The consistent effect of noun surprisal on classifier choice in both our development and test sets supports the availability-based production hypothesis, and is inconsistent with the predictions of UID.",
"One potential concern regarding the above conclusion that noun predictability drives classifier choice is that it might not fully take into account effects of the frequencies of classifiers themselves on availability.",
"The availability-based production hypothesis does not exclude the possibility that a classifier's accessibility is substantially dependent on its frequency, and the general classifier is indeed the most frequently used classifier.",
"However, if specific classifier frequency were confounding the apparent effect of noun surprisal that we see in our analysis, there would have to be a correlation in our dataset between specific classifier frequency and noun surprisal.",
"Our inclusion of a by-specific-classifier random intercept largely rules out the possibility that even a correlation that the above-mentioned one could be driving our effect.",
"To be thorough, we tried a version of our regression analysis that also include a fixed effect for the log frequency of potential specific classifier as a control.",
"We did not find any qualitative change to 3 As can be seen in Figure 6, there is a bump at bin 27 in the rate of using a specific classifier.",
"We consider this likely to be due to data sparsity: the number of observations is small in the last two bins of noun surprisal ( n = 27 and n = 3 ), and there is no such bump in the development set.",
"4 We found a marginal effect of noun frequency in our unfiltered development set, where the more frequent the noun was, the less likely it was used with a specific classifier.",
"We did further analysis with the dev set and found that thenouns (some of them were misclassified as nouns from the results of the automatic parsing) that were excluded tend to have a higher frequency compared to the ones that were included, and the excluded ones also had a lower rate of concurring with a specific classifier.",
"This tendency suggests that in the unfiltered test set, illegible nouns may contribute at least partially to the noun frequency effect.",
"the results: the effect of noun surprisal on specific classifier choice remains the same.",
"We also note that in this new analysis, we do not find a significant effect of specific classifier log frequency on classifier choice ( p = 0 . 629 for the dev set and p = 0 . 7 for the test set).",
"This additional analysis suggests that it is unlikely that the effect of specific classifier frequency to be driving the effect of noun surprisal.",
"Overall, we did not find evidence for the UID hypothesis at the level of alternating options with different information density, in our case, a specific classifier versus a general classifier.",
"We demonstrate that within the scope of near semantically invariant alternation, classifier choice is modulated by noun predictability with the tendency to facilitate speaker production.",
"Our results lend support to an availability-based production model.",
"We did not find consistent evidence for the effect of noun frequency on classifier choice.",
"The effect of noun frequency remains unclear and we will need to test it with a larger sample of noun types.",
"Though it has proven difficult to disentangle UID and availability-based production through optional word omission phenomena, we have demonstrated here that the two accounts can potentially be distinguished through at least one word alternation phenomenon.",
"The UID hypothesis predicts that predictable nouns favor the general classifier whereas availability-based production predicts that predictable nouns favor a specific classifier.",
"Our empirical results favor the availability-based production account.",
"To the best of our knowledge, this is the first study that demonstrates contextual predictability is correlated with classifier choice.",
"This study provides a starting point to understand the cognitive mechanisms governing speaker choices as manifested in various language optionalities.",
"Ultimately we plan to complement our corpus analysis with real-time language production experiments to more throughly test hypotheses about speaker choice.",
"We gratefully acknowledge valuable feedback from Naomi Feldman, members of MIT's Computational Psycholinguistics Laboratory, three",
"anonymous reviewers, technical advice for data processing from Wenzhe Qiu, and support from NSF grants BCS-1456081 and BCS-1551866 to RPL, and an MIT Henry E. Singleton (1940) Fellowship to MZ."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"objective",
"abstain",
"result",
"objective",
"method",
"abstain",
"other",
"other"
] |
[
"Recent work in Dialogue Act classification has treated the task as a sequence labeling problem using hierarchical deep neural networks.",
"We build on this prior work by leveraging the effectiveness of a context-aware self-attention mechanism coupled with a hierarchical recurrent neural network.",
"We conduct extensive evaluations on standard Dialogue Act classification datasets and show significant improvement over state-of-the-art results on the Switchboard Dialogue Act (SwDA) Corpus.",
"We also investigate the impact of different utterance-level representation learning methods and show that our method is effective at capturing utterance-level semantic text representations while maintaining high accuracy.",
"Dialogue Acts (DAs) are the functions of utterances in dialogue-based interaction (Austin, 1975).",
"A DA represents the meaning of an utterance at the level of illocutionary force, and hence, constitutes the basic unit of linguistic communication (Searle, 1969).",
"DA classification is an important task in Natural Language Understanding, with applications in question answering, conversational agents, speech recognition, etc.",
"Examples of DAs can be found in Table 1.",
"Here we have a conversation of 7 utterances between two speakers.",
"Each utterance has a corresponding label such as Question or Backchannel .",
"Early work in this field made use of statistical machine learning methods and approached the task as either a structured prediction or text classification problem (Stolcke et al., 2000; Ang et al., 2005; Zimmermann, 2009; Surendran and Levow, 2006).",
"Many recent studies have proposed deep learning models for the DA classification task with promising results (Lee and Dernoncourt, 2016; Khanpour et al., 2016; Ortega and Speaker Utterance DA label A Okay. Other A Um, what did you do this weekend? Question B Well, uh, pretty much spent most of my time in the yard. Statement B [Throat Clearing] Non Verbal A Uh-Huh. Backchannel A What do you have planned for your yard? Question B Well, we're in the process of, revitalizing it. Statement Table 1: A snippet of a conversation sample from the SwDA Corpus. Each utterance has a corresponding dialogue act label. Vu, 2017).",
"However, most of these approaches treat the task as a text classification problem, treating each utterance in isolation, rendering them unable to leverage the conversation-level contextual dependence among utterances.",
"Knowing the text and/or the DA labels of the previous utterances can assist in predicting the current DA state.",
"For instance, in Table 1, the Answer or Statement dialog acts often follow Question type utterances.",
"This work draws from recent advances in NLP such as self-attention, hierarchical deep learning models, and contextual dependencies to produce a dialogue act classification model that is effective across multiple domains.",
"Specifically, we propose a hierarchical deep neural network to model different levels of utterance and dialogue act semantics, achieving state-of-the-art performance on the Switchboard Dialogue Act Corpus.",
"We demonstrate how performance can improve by leveraging context at different levels of the model: previous labels for sequence prediction (using a CRF), conversation-level context with self-attention for utterance representation learning, and character embeddings at the word-level.",
"Finally, we explore different ways to learn effective utterance representations, which serve as the building blocks of our hierarchical architecture for DA classification.",
"A full review of all DA classification methods is outside the scope of the paper, thus we focus on two main classes of approaches which have dominated recent research: those that treat DA classification as a text classification problem, where each utterance is classified in isolation, and those that treat it as a sequence labeling problem.",
"Text Classification : Lee and Dernoncourt (2016) build a vector representation for each utterance, using either a CNN or RNN, and use the preceding utterance(s) as context to classify it.",
"Their model was extended by Khanpour et al. (2016) and Ortega and Vu (2017).",
"Shen and Lee (2016) used a variant of the attention-based encoder for the task.",
"Ji et al. (2016) use a hybrid architecture, combining an RNN language model with a latent variable model.",
"Sequence Labeling : Kalchbrenner and Blunsom (2013) used a mixture of sentence-level CNNs and discourse-level RNNS to achieve state-of-the-art results on the task.",
"Recent works (Li and Wu, 2016; Liu et al., 2017) have increasingly employed hierarchical architectures to learn and model multiple levels of utterance and DA dependencies.",
"Kumar et al. (2018), Chen et al. (2018) and Tran et al. (2017) used RNN-based hierarchical neural networks, using different combinations of techniques like last-pooling or attention mechanism to encode sentences, coupled with CRF decoders.",
"Chen et al. (2018) achieved the highest performance to date on the two datasets for this task.",
"Our work extends these hierarchical models and leverages a combination of techniques proposed across these prior works (CRF decoding, contextual attention, and character-level word embeddings) with self-attentive representation learning, and is able to achieve state-of-the-art performance.",
"The task of DA classification takes a conversation C as input, which is a varying length sequence of utterances U = { u 1 , u 2 , ...u L } .",
"Each utterance u i U , in turn, is a sequence of varying lengths of words { w 1 i , w 2 i , ..., w N i i } , and has a corresponding target label y i Y .",
"Hence, each conversation (i.e. a sequence of utterances) is mapped to a corresponding sequence of target (cid:6932) 21 (cid:6969) 21 (cid:6969) 21 (cid:6969) 22 (cid:6969) 22 (cid:6969) 23 (cid:6969) 23 (cid:6966) 21 (cid:6969) 21 (cid:6969) 22 (cid:6969) 23 (cid:6968) 1 (cid:6968) 1 (cid:6968) 2 (cid:6968) 2 (cid:6968) 3 (cid:6968) 3 (cid:6968) 1 (cid:6968) 2 (cid:6968) 3 (cid:6986) 1 (cid:6922) 1 (cid:6922) 2 (cid:6922) 3 ... ... (cid:6922) n (cid:6897) (cid:6966) 22 (cid:6966) 23 how Words Embeddings Bi-Directional RNN Concatenate (cid:6986) 2 (cid:6986) 3 Context-aware Self-Attention Fully Connected Bi-Directional RNN Concatenate CRF (cid:6969) 2 (cid:6969) 1 (cid:6969) 3 (cid:6964) 21 (cid:6932) 22 are (cid:6964) 22 (cid:6932) 23 you (cid:6964) 23 Figure 1: Model Architecture labels Y = { y 1 , y 2 , ..., y L } , which represents the DAs associated with the corresponding utterances.",
"Figure 1 shows the overall architecture of our model, which involves three main components: (1) an utterance-level RNN that encodes the information within the utterances at the word and character-level; (2) a context-aware self-attention mechanism that aggregates word representations into utterance representations; and (3) a conversation-level RNN that operates on the utterance encoding output of the attention mechanism, followed by a CRF layer to predict utterance labels.",
"We describe them in detail below.",
"For each word in an utterance, we combine two different word embeddings: GloVe (Pennington et al., 2014) and pre-trained ELMo representations (Peters et al., 2018) with fine-tuned task-specific parameters, which have shown superior performance in a wide range of tasks.",
"The word embedding is then concatenated with its CNN-based 50-D character-level embedding (Chiu and Nichols, 2016; Ma and Hovy, 2016) to get the complete word-level representations.",
"The motivation behind incorporating subword-level information is to infer the lexical features of utterances and named entities better.",
"The word representation layer is followed by a bidirectional GRU (Bi-GRU) layer.",
"Concatenating the forward and backward outputs of the Bi-GRU generates the utterance embedding that serves as input to the utterance-level context-aware self-attention mechanism which learns the final utterance representation.",
"Self-attentive representations encode a variable-length sequence into a fixed size, using an attention mechanism that considers different positions within the sequence.",
"Inspired by Tran et al. (2017), we use the previous hidden state from the conversation-level RNN (Section 3.3), which provides the context of the conversation so far, and combine it with the hidden states of all the constituent words in an utterance, into a self-attentive encoder (Lin et al., 2017), which computes a 2 D representation of each input utterance.",
"We follow the notation originally presented in Lin et al. (2017) to explain our modification of their self-attentive sentence representation below.",
"An utterance u i , which is a sequence of n words { w 1 i , w 2 i , ...w ni } , is mapped into an embedding layer, resulting in a d -dimensional word embedding for every word.",
"It is then fed into a bidirectional-GRU layer, whose hidden state outputs are concatenated at every time step.",
"H i represents the n GRU outputs of size 2 u ( u is the number of hidden units in a unidirectional GRU).",
"Here, W s 1 is a weight matrix with a shape of d a 2 u , W s 2 is a matrix of parameters of shape r d a , where r and d a are hyperparameters we can set arbitrarily, and W s 3 is a parameter matrix of shape d a k for the conversational context, where k is another hyperparameter that is the size of a hidden state in the conversation-level RNN (size of g i 1 ), and b is a vector representing bias.",
"Equation 5 can then be treated as a 2-layer MLP with bias, with d a hidden units, W s 1 , W s 2 and W s 3 as weight parameters.",
"The scores S i are mapped into a probability matrix A i by means of a softmax function: A i = softmax ( S i ) (6) which is then used to obtain a 2-d representation M i of the input utterance, using the GRU hidden states H i according to the attention weights provided by A i as follows: M i = A i H i (7) This 2-d representation is then projected to a 1-d embedding (denoted as h i ), using a fully-connected layer.",
"The conversation-level GRU then operates over this 1-d utterance embedding, and hence, we can represent g i as: g i = GRU ( h i , g i 1 ) (8) g i = GRU ( h i , g i +1 ) (9) g i = concat ( g i , g i ) (10) g i then provides the conversation-level context used to learn the attention scores and 2-d representation ( M i +1 ) for the next utterance in the conversation ( h i + 1 ).",
"The utterance representation h i from the previous step is passed on to the conversation-level RNN, which is another bidirectional GRU layer used to encode utterances across a conversation.",
"The hidden states g i and g i (Figure 1) are then concatenated to get the final representation g i of each utterance, which is further propagated to a linear chain CRF layer.",
"The CRF layer considers the correlations between labels in context and jointly decodes the optimal sequence of utterance labels for a given conversation, instead of decoding each label independently.",
"We evaluate the classification accuracy of our model on the two standard datasets used for DA classification: the Switchboard Dialogue Act Corpus (SwDA) (Jurafsky et al., 1997) consisting of 43 classes, and the 5-class version of the ICSI Meeting Recorder Dialogue Act Corpus (MRDA) introduced in (Ang et al., 2005).",
"For both datasets, Dataset | C | | V | Train Validation Test MRDA 5 12k 78k 16k 15k SwDA 43 20k 193k 23k 5k Table 2: Number of utterances by dataset.",
"we use the train, validation and test splits as de-fined in Lee and Dernoncourt (2016).",
"Table 2 shows the statistics for both datasets.",
"They are highly imbalanced in terms of class distribution, with the DA classes Statement-non-opinion and Acknowledge/Backchannel in SwDA and Statement in MRDA making up over 50% of the labels in each set.",
"We compare the classification accuracy of our model against several other recent methods (Ta-ble 3).",
"1 Four approaches (Chen et al., 2018; Tran et al., 2017; Ortega and Vu, 2017; Shen and Lee, 2016) use attention in some form to model the conversations, but none of them have explored self-attention for the task.",
"The last three use CRFs in the final layer of sequence labeling.",
"Only one other method (Chen et al., 2018) uses character-level word embeddings.",
"All models and their variants were trained ten times and we report the average test performance.",
"Our model outperforms state-of-the-art methods by 1.6% on SwDA, the primary dataset for this task, and comes within 0.6% on MRDA.",
"It also beats a TF-IDF GloVe baseline (described in Section 5.2) by 16.4% and 12.2%, respectively.",
"The improvements that the model is able to make over the other methods are significant, however, the gains on MRDA still fall short of the state-of-the-art by 0.6%.",
"This can mostly be attributed to the conversation/context lengths and label noise at the conversation level.",
"Conversations in MRDA (1493 utterances on average) are significantly longer than in SwDA (271 utterances on av-erage).",
"In spite of having nearly 12% the number 1 Contemporaneous to this submission, (Li et al., 2018; Wan et al., 2018; Ravi and Kozareva, 2018) proposed different approaches for the task.",
"We do not focus on them here per NAACL 2019 guidelines, however note that our system outperforms the first two.",
"(Ravi and Kozareva, 2018) bypasses the need for complex networks with huge parameters but its overall accuracy is 4.2% behind our system, despite being 0.2% higher on SwDA.",
"of labels (5 vs 43) compared to SwDA, MRDA has 6 times the normalized label entropy in its data.",
"Consequently, due to the noise in label dependencies, and hence, in the inherent conversational structure, the model is not able to yield as big of a gain on the MRDA as it does on the SwDA.",
"Consequently, learning long-range dependencies is a challenge because of noisier and longer path lengths in the network.",
"This is illustrated in Figures 2 and 3, which show for every class, the variation between the entropy of the previous label in a conversation, and the accuracy of that class.",
"MRDA was found to have a high negative correlation 2 (-0.68) between previous label entropy and accuracy, indicating the impact of label noise, which was compounded by longer conversations.",
"On the other hand, SwDA was found to have a low positive correlation (+0.22), which could be compensated by significantly shorter conversations.",
"One of the primary motivations for this work was to investigate whether one can improve performance by learning better representations for utterances.",
"To address this, we retrained our model by replacing the utterance representation learning (utterance-level RNN + context-aware self-attention) component with various sentence representation learning methods (either pre-training them or learning jointly), and feeding them into the conversation-level recurrent layers in the hierarchical model, so that the performance is indicative of the quality of utterance representations.",
"There are three main categories of utterance representation learning approaches:",
"(i) the baseline which uses a TF-IDF weighted sum of GloVe word embeddings;",
"(ii) pre-trained on cor-2 Pearson's r Method SwDA MRDA Baseline TF-IDF GloVe 66.5 78.7 Pre-trained on Corpus Skip Thought Vectors 72.6 82.8 Paragraph vectors 72.5 82.6 Joint Learning RNN-Encoder 74.8 85.7 Bi-RNN-LastState 76.2 85.4 Bi-RNN-MaxPool 77.6 86.7 CNN 76.9 84.5 Bi-RNN + Attention 80.1 87.7 + Context 81.8 89.2 Bi-RNN + Self-attention 81.1 88.6 + Context 82.9 91.1 Table 4: Performance of utterance representation methods when integrated with the hierarchical model pus, where we first learn utterance representations on the corpus using Skip-Thought Vectors (Kiros et al., 2015) and Paragraph Vectors (Le and Mikolov, 2014), and then use them with the rest of the model;",
"(iii) jointly learned with the DA classification task.",
"Table 4 describes the performance of different utterance representation learning methods when combined with the overall architecture on both datasets.",
"Introducing the word-level attention mechanism (Yang et al., 2016) enables the model to learn better representations by attending to more informative words in an utterance, resulting in better performance (Bi-RNN + Attention).",
"The self-attention mechanism (Bi-RNN + Self-attention) leads to even greater overall improvements.",
"Adding context information (previous recurrent state of the conversation) boosts performance significantly.",
"A notable aspect of our model is how contextual information is leveraged at different levels of the sequence modeling task.",
"The combination of conversation-level contextual states for utterance-representation learning (+ Context) and a CRF at the conversation level to further inform conversation sequence modeling, leads to a collective performance improvement.",
"This is particularly pronounced on the SwDA dataset: the two variants of the context-aware attention models (Bi-RNN + Attention + Context and Bi-RNN + Self-attention + Context) have significant performance gains.",
"We developed a model for DA classification with context-aware self-attention, which significantly outperforms earlier models on the commonly-used",
"SwDA dataset and is very close to state-of-the-art on MRDA.",
"We experimented with different utterance representation learning methods and showed that utterance representations learned at the lower levels can impact the classification performance at the higher level.",
"Employing self-attention, which has not previously been applied to this task, enables the model to learn richer, more effective utterance representations for the task.",
"As future work, we would like to experiment with other attention mechanisms such as multihead attention (Vaswani et al., 2017), directional self-attention (Shen et al., 2018a), block self-attention (Shen et al., 2018b), or hierarchical attention (Yang et al., 2016), since they have been shown to address the limitations of vanilla attention and self-attention by either incorporating information from different representation subspaces at different positions to capture both local and long-range context dependencies, encoding temporal order information, or by attending to context dependencies at different levels of granularity.",
"The authors would like to thank Dimitris Alikanio-tis, Maria Nadejde and Courtney Napoles for their insightful discussions, and the anonymous reviewers for their helpful comments."
] | [
"abstain",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"result",
"other"
] |
[
"The goal of dialogue state tracking (DST) is to predict the current dialogue state given all previous dialogue contexts.",
"Existing approaches generally predict the dialogue state at every turn from scratch.",
"However, the overwhelming majority of the slots in each turn should simply inherit the slot values from the previous turn.",
"Therefore, the mechanism of treating slots equally in each turn not only is inefficient but also may lead to additional errors because of the redundant slot value generation.",
"To address this problem, we devise the two-stage DSS-DST which consists of the Dual Slot Selector based on the current turn dialogue, and the Slot Value Generator based on the dialogue history.",
"The Dual Slot Selector determines each slot whether to update slot value or to inherit the slot value from the previous turn from two aspects: (1) if there is a strong relationship between it and the current turn dialogue utterances; (2) if a slot value with high reliability can be obtained for it through the current turn dialogue.",
"The slots selected to be updated are permitted to enter the Slot Value Generator to update values by a hybrid method, while the other slots directly inherit the values from the previous turn.",
"Empirical results show that our method achieves 56.93%, 60.73%, and 58.04% joint accuracy on MultiWOZ 2.0, MultiWOZ 2.1, and MultiWOZ 2.2 datasets respectively and achieves a new state-of-the-art performance with signifi-cant improvements.",
"1 1 Introduction Task-oriented dialogue has attracted increasing attention in both the research and industry communities.",
"As a key component in task-oriented dialogue systems, Dialogue State Tracking (DST) aims to Corresponding author.",
"Code is available at https://github.com/guojinyu88/DSSDST",
"extract user goals or intents and represent them as a compact dialogue state in the form of slot-value pairs of each turn dialogue.",
"DST is an essential part of dialogue management in task-oriented dialogue systems, where the next dialogue system action is selected based on the current dialogue state.",
"Early dialogue state tracking approaches extract value for each slot predefined in a single domain (Williams et al., 2014; Henderson et al., 2014a,b).",
"These methods can be directly adapted to multi-domain conversations by replacing slots in a single domain with domain-slot pairs predefined.",
"In multi-domain DST, some of the previous works study the scalability of the model (Wu et al., 2019), some aim to fully utilizing the dialogue history and context (Shan et al., 2020; Chen et al., 2020a; Quan and Xiong, 2020), and some attempt to explore the relationship between different slots (Hu et al., 2020; Chen et al., 2020b).",
"Nevertheless, existing approaches generally predict the dialogue state at every turn from scratch.",
"The overwhelming majority of the slots in each turn should simply inherit the slot values from the previous turn.",
"Therefore, the mechanism of treating slots equally in each turn not only is inefficient but also may lead to additional errors because of the redundant slot value generation.",
"To address this problem, we propose a DSS-DST which consists of the Dual Slot Selector based on the current turn dialogue, and the Slot Value Generator based on the dialogue history.",
"At each turn, all slots are judged by the Dual Slot Selector first, and only the selected slots are permitted to enter the Slot Value Generator to update their slot value, while the other slots directly inherit the slot value from the previous turn.",
"The Dual Slot Selector is a two-stage judging process.",
"It consists of a Preliminary Selector and an Ultimate Selector, which jointly make a judgment for each slot according to the current turn dialogue.",
"The intuition behind this design is that the Preliminary Selector makes a coarse judgment to exclude most of the irrelevant slots, and then the Ultimate Selector makes an intensive judgment for the slots selected by the Preliminary Selector and combines its confidence with the confidence of the Preliminary Selector to yield the final decision.",
"Specifically, the Preliminary Selector briefly touches on the relationship of current turn dialogue utterances and each slot.",
"Then the Ultimate Selector obtains a temporary slot value for each slot and calculates its reliability.",
"The rationale for the Ultimate Selector is that if a slot value with high reliability can be obtained through the current turn dialogue, then the slot ought to be updated.",
"Eventually, the selected slots enter the Slot Value Generator and a hybrid way of the extractive method and the classification-based method is utilized to generate a value according to the current dialogue utterances and dialogue history.",
"Our proposed DSS-DST achieves state-of-the-art joint accuracy on three of the most actively studied datasets: MultiWOZ 2.0 (Budzianowski et al., 2018), MultiWOZ 2.1 (Eric et al., 2019), and MultiWOZ 2.2 (Zang et al., 2020) with joint accuracy of 56.93%, 60.73%, and 58.04%.",
"The results outperform the previous state-of-the-art by +2.54%, +5.43%, and +6.34%, respectively.",
"Furthermore, a series of subsequent ablation studies and analysis are conducted to demonstrate the effectiveness of the proposed method.",
"Our contributions in this paper are three folds: We devise an effective DSS-DST which consists of the Dual Slot Selector based on the current turn dialogue and the Slot Value Generator based on the dialogue history to alleviate the redundant slot value generation.",
"We propose two complementary conditions as the base of the judgment, which signifi-cantly improves the performance of the slot selection.",
"Empirical results show that our model achieves state-of-the-art performance with sig-nificant improvements.",
"Traditional statistical dialogue state tracking models combine semantics extracted by spoken language understanding modules to predict the current dialogue state (Williams and Young, 2007; Thomson and Young, 2010; Wang and Lemon, 2013;",
"Williams, 2014) or to jointly learn speech understanding (Henderson et al., 2014c; Zilka and Ju-rcicek, 2015; Wen et al., 2017).",
"With the recent development of deep learning and representation learning, most works about DST focus on encoding dialogue context with deep neural networks and predicting a value for each possible slot (Xu and Hu, 2018; Zhong et al., 2018; Ren et al., 2018; Xie et al., 2018).",
"For multi-domain DST, slot-value pairs are extended to domain-slot-value pairs for the target (Ramadan et al., 2018; Gao et al., 2019; Wu et al., 2019; Chen et al., 2020b; Hu et al., 2020; Heck et al., 2020; Zhang et al., 2020a).",
"These models greatly improve the performance of DST, but the mechanism of treating slots equally is inefficient and may lead to additional errors.",
"SOM-DST (Kim et al., 2020) considered the dialogue state as an explicit fixed-size memory and proposed a selectively overwriting mechanism.",
"Nevertheless, it arguably has limitations because it lacks the explicit exploration of the relationship between slot selection and local dialogue information.",
"On the other hand, dialogue state tracking and machine reading comprehension (MRC) have similarities in many aspects (Gao et al., 2020).",
"In MRC task, unanswerable questions are involved, some studies pay attention to this topic with straightforward solutions.",
"(Liu et al., 2018) appended an empty word token to the context and added a simple classification layer to the reader.",
"(Hu et al., 2019) used two types of auxiliary loss to predict plausible answers and the answerability of the question.",
"(Zhang et al., 2020c) proposed a retrospective reader that integrates both sketchy and intensive reading.",
"(Zhang et al., 2020b) proposed a verifier layer to context embedding weighted by start and end distribution over the context words representations concatenated to [CLS] token representation for BERT.",
"The slot selection and the mechanism of local reliability verification in our work are inspired by the answerability prediction in machine reading comprehension.",
"Figure 1 illustrates the architecture of DSS-DST.",
"DSS-DST consists of Embedding, Dual Slot Selector, and Slot Value Generator.",
"In the task-oriented dialogue system, given a dialogue Dial = { ( U 1 , R 1 ); ( U 2 , R 2 ) . . . ; ( UT , RT ) } of T turns where U t represents user utterance and R t represents system response of turn t .",
"We define Total_ score t j Total_ score t j Ult_ score t j Ult_ score t j Pre_ score t j Pre_ score t j Embedding Preliminary Selector <0 v = v t t-1 j j v = v t t-1 j j >0 Ultimate Selector for the j -th slot < > Slot Value Generator inherit ( ) inherit ( ) inherit ( ) inherit ( ) v = v t t-1 j j v = v t t-1 j j Embedding D t B t-1 H t Preliminary Selector [SLOT] t j [SLOT] t j H t [SLOT] t j [SLOT] t j SAM Softmax y Pre_ score j t j t Pre_ score j t t j t j H t [SLOT] t j [SLOT] t j Extractor t j t j t j Classifier Ult_ score j t Ult_ score j t V j V j Ult_ score jt Ult_ score jt V j V j Ultimate Selector Slot Value Generator D t B t-1 D t-1 D t-k+1 [SLOT] t j [SLOT] t j H t Extractor t j t j ' t j ' Classifier V j V j V j V j v = tj v = tj t j t j ' t j ' tj v j v E n c o de r E n c o d e r Dual Slot Selector Figure 1: The architecture of the proposed DSS-DST model.",
"the dialogue state at turn t as B t = { ( S j , V jt ) | 1 j J } , where S j are the slots, V jt are the corresponding slot values, and J is the total number of such slots.",
"Following (Lee et al., 2019), we use the term slot to refer to the concatenation of a domain name and a slot name (e.g., restaurant food ).",
"We employ the representation of the previous turn dialog state B t 1 concatenated to the representation of the current turn dialogue D t as input:",
"where [CLS] is a special token added in front of every turn input.",
"Following SOM-DST (Kim et al., 2020), we denote the representation of the dialogue at turn t as D t = R t ; U t [SEP] , where R t is the system response and U t is the user utterance.",
"; is a special token used to mark the boundary between R t and U t , and [SEP] is a special token used to mark the end of a dialogue turn.",
"The representation of the dialogue state at turn t is B t = B 1 t . . . B Jt , where B jt = [SLOT] j S j V jt is the representation of the j -th slot-value pair.",
"is a special token used to mark the boundary between a slot and a value.",
"[SLOT] j is a special token that represents the aggregation information of the j -th slot-value pair.",
"We feed a pre-trained ALBERT (Lan et al., 2019) encoder with the input X t .",
"Specifically, the input text is first tokenized into subword tokens.",
"For each token, the input is the sum of the input tokens X t and the segment id embeddings.",
"For the segment id, we use 0 for the tokens that belong to B t 1 and 1 for the tokens that belong to D t .",
"The output representation of the encoder is O t R | X t | d , and h [CLS] t , h [SLOT] j t R d are the outputs that correspond to [CLS] and [SLOT] j , respectively.",
"To obtain the representation of each dialogue and state, we split the O t into H t and H Bt 1 as the output representations of the dialogue at turn t and the dialogue state at turn t 1 .",
"The Dual Slot Selector consists of a Preliminary Selector and an Ultimate Selector, which jointly make a judgment for each slot according to the current turn dialogue.",
"used as the subsequent components.",
"The slot can be regarded as a special category of questions, so inspired by the previous success of explicit attention matching between passage and question in MRC (Kadlec et al., 2016; Dhingra et al., 2017; Wang et al., 2017; Seo et al., 2016), we feed a representation H and the output representation h [SLOT] j t at turn t to the Slot-Aware Matching layer by taking the slot presentation as the attention to the representation H : SAM( H, j, t ) = softmax( H ( h [SLOT] j t ) (cid:124) ) (2) The output represents the correlation between each position of H and the j -th slot at turn t .",
"Preliminary Selector The Preliminary Selector briefly touches on the relationship of current turn dialogue utterances and each slot to make an initial judgment.",
"For the j -th slot (1 j J ) at turn t , we feed its output representation h [SLOT] j t and the dialogue representation H t to the SAM as follows: jt =SAM( H t , j, t ) (3) where jt RN 1 denotes the correlation between each position of the dialogue and the j -th slot at turn t .",
"Then we get the aggregated dialogue representation H jt RN d and passed it to a fully connected layer to get classification the j -th slot's logits y jt composed of selected ( logit sel it ) and fail ( logit fai jt ) elements as follows: H jt , m = jt , m H t , m , 0 m < N (4) y jt = softmax(FC( H jt )) (5) We calculate the difference as the Preliminary Selector score for the j -th slot at turn t : Pre score jt = logit sel jt logit fai jt , and define the set of the slot indices as U 1 ,t = { j | Pre score jt > 0 } , and its size as J 1 ,t = | U 1 ,t | .",
"In the next paragraph, the slot in U 1 ,t will be processed as the target object of the Ultimate Selector.",
"Ultimate Selector The Ultimate Selector will make the judgment on the slots in U 1 ,t .",
"The mechanism of the Ultimate Selector is to obtain a temporary slot value for the slot and calculate its reliability through the dialogue at turn t as its confidence for each slot.",
"Specifically, for the j -th slot in U 1 ,t ( 1 j J 1 ,t ), we first attempt to obtain the temporary slot value jt using the extractive method: We employ two different linear layers and feed H t as the input to obtain the representation H s t and H e t for predicting the start and end, respectively.",
"Then we feed them to the SAM with the j -th slot to obtain the correlation representation s jt and e jt as follows: H s t = W s t H t (6) H e t = W e t H t (7) s jt = SAM( H s t , j, t ) (8) e jt = SAM( H e t , j, t ) (9) The position of the maximum value in s jt and e jt will be the start and end predictions of jt : ps jt = argmax m ( s jt , m ) (10) pe jt = argmax m ( e jt , m ) (11) jt = Dial t [ps jt : pe jt ] (12) Here we define V j , the candidate value set of the j -th slot.",
"If jt belongs to V j , we calculate its proportion of all possible extracted temporary slot values and calculate the Ult score jt as the score of the j -th slot: logit span jt = exp( s jt [ps jt ] + e jt [pe jt ]) N 1 (cid:80) p 1 =0 N 1 (cid:80) p 2 = p 1 +1 exp( s jt [ p 1 ] + e jt [ p 2 ]) (13) logit null jt = exp( s jt [0] + e jt [0]) N 1 (cid:80) p 1 =0 N 1 (cid:80) p 2 = p 1 +1 exp( s jt [ p 1 ] + e jt [ p 2 ]) (14) Ult score jt = logit span jt logit null jt (15) If jt does not belong to V j , we employ the classification-based method instead to select a temporary slot value from V j .",
"Specifically, the dialogue representation H jt is passed to a fully connected layer to get the distribution of V j .",
"We choose the candidate slot value corresponding to the maximum value as the new temporary slot value jt , and calculate the distribution probability difference between jt and None as the Ult score jt : c jt = softmax(FC( H jt )) (16) max c = argmax m ( c jt , m ) (17) Ult score jt = c jt [ max c] c jt [0] (18) We choose 0 as index because V j [0] = None .",
"Threshold-based decision Following previous studies (Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019; Lan et al., 2019), we adopt the threshold-based decision to make the final judgment for each slot in U 1 ,t .",
"The slot-selected threshold is set and determined in our model.",
"The total score of the j -th slot is the combination of the predicted Preliminary Selector's score and the predicted Ultimate Selector's score: Total score jt = Pre score jt +(1 )Ult score jt (19) where is the weight.",
"We define the set of the slot indices as U 2 ,t = { j | Total score jt > } , and its size as J 2 ,t = | U 2 ,t | .",
"The slot in U 2 ,t will enter the Slot Value Generator to update the slot value.",
"After the judgment of the Dual Slot Selector, the slots in U 2 ,t are the final selected slots.",
"For each j -th slot in U 2 ,t , the Slot Value Generator generates a value for it.",
"Conversely, the slots that are not in U 2 ,t will inherit the slot value of the previous turn (i.e., V it = V it 1 , 1 i J J 2 ,t ).",
"For the sake of simplicity, we sketch the process as follows because this module utilizes the same hybrid way of the extractive method and the classification-based method as in the Ultimate Selector: X g t = [CLS] D t D t k +1 B t 1 (20) H g t = Embedding (X g t ) (21) g jt = Ext method ( H g t ) , 1 j J 2 ,t (22) V jt = g jt , g jt V j (23) V jt = Cls method ( H g t ) , g jt / V j (24) Significantly, the biggest difference between the Slot Value Generator and the Ultimate Selector is that the input utterances of the Slot Value Generator are the dialogues of the previous k 1 turns and the current turn, while the Ultimate Selector only utilizes the current turn dialogue as the input utterances.",
"During training, we optimize both Dual Slot Selector and Slot Value Generator.",
"(25) where y jt denotes the prediction and y jt is the target indicating whether the slot is selected.",
"Ultimate Selector The training objectives of both extractive method and classification-based method are defined as cross-entropy loss: L ext ,t = 1 J 1 ,t J 1 ,t (cid:88) j log(logit p jt ) (26) L cls ,t = 1 J 1 ,t J 1 ,t (cid:88) j |V j | (cid:88) i y c jt,i log c jt,i (27) where logit p jt is the target indicating the proportion of all possible extracted temporary slot values which is calculated according to the form of Equation 13, and y c jt,i is the target indicating the probability of candidate values.",
"We choose MultiWOZ 2.0 (Budzianowski et al., 2018), MultiWOZ 2.1 (Eric et al., 2019), and the latest MultiWOZ 2.2 (Zang et al., 2020) as our training and evaluation datasets.",
"These are the three largest publicly available multi-domain task-oriented dialogue datasets, including over 10,000 dialogues, 7 domains, and 35 domain-slot pairs.",
"MultiWOZ 2.1 fixes the previously existing annotation errors.",
"MultiWOZ 2.2 is the latest version of this dataset.",
"It identifies and fixes the annotation errors of dialogue states on MultiWOZ2.1, solves the inconsistency of state updates and the problems of ontology, and redefines the dataset by dividing all slots into two types: non-categorical and categorical.",
"In conclusion, it helps make a fair comparison between different models and will be crucial in the future research of this field.",
"Following TRADE (Wu et al., 2019), we use five domains for training, validation, and testing, including restaurant , train , hotel , taxi , attraction .",
"These domains contain 30 slots (i.e., J = 30 ).",
"We use joint accuracy and slot accuracy as evaluation metrics.",
"Joint accuracy refers to the accuracy of the dialogue state in each turn.",
"Slot accuracy only considers individual slot-level accuracy.",
"following competitive baselines: DSTreader formulates the problem of DST as an extractive QA task and extracts the value of the slots from the input as a span (Gao et al., 2019).",
"TRADE encodes the whole dialogue context and decodes the value for every slot using a copy-augmented decoder (Wu et al., 2019).",
"NADST uses a Transformer-based non-autoregressive decoder to generate the current turn dialogue state (Le et al., 2019).",
"PIN integrates an interactive encoder to jointly model the in-turn dependencies and cross-turn dependencies (Chen et al., 2020a).",
"DS-DST uses two BERT-base encoders and takes a hybrid approach (Zhang et al., 2020a).",
"SAS proposes a Dialogue State Tracker with Slot Attention and Slot Information Sharing to reduce redundant informa-tion's interference (Hu et al., 2020).",
"SOM-DST considers the dialogue state as an explicit fixed-size memory and proposes a selectively overwriting mechanism (Kim et al., 2020).",
"DST-Picklist performs matchings between candidate values and slot-context encoding by considering all slots as picklist-based slots (Zhang et al., 2020a).",
"SST proposes a schema-guided multi-domain dialogue state tracker with graph attention networks (Chen et al., 2020b).",
"TripPy extracts all values from the dialog context by three copy mechanisms (Heck et al., 2020).",
"We employ a pre-trained ALBERT-large-uncased model (Lan et al., 2019) for the encoder of each part.",
"The hidden size of the encoder d is 1024.",
"We use AdamW optimizer (Loshchilov and Hutter, 2018) and set the warmup proportion to 0.01 and L2 weight decay of 0.01.",
"We set the peak learning rate to 0.03 for the Preliminary Selector and 0.0001 for the Ultimate Selector and the Slot Value Generator, respectively.",
"The max-gradient normalization is utilized and the threshold of gradient clipping is set to 0.1.",
"We use a batch size of 8 and set the dropout (Srivastava et al., 2014) rate to 0.1.",
"In addition, we utilize word dropout (Bowman et al., 2016) by randomly replacing the input tokens with the special [UNK] token with the probability of 0.1.",
"The max sequence length for all inputs is fixed to 256.",
"We train the Preliminary Selector for 10 epochs and train the Ultimate Selector and the Slot Value Generator for 30 epochs.",
"During training the Slot Value Generator, we use the ground truth selected slots instead of the predicted ones.",
"We set k to 2, to 0.55, and to 0.",
"For all experiments, we report the mean joint accuracy over 10 different random seeds to reduce statistical errors.",
"Table 1 shows the joint accuracy and the slot accuracy of our model and other baselines on the test sets of MultiWOZ 2.0, 2.1, and 2.2.",
"As shown in the table, our model achieves state-of-the-art performance on three datasets with joint accuracy of 56.93%, 60.73%, and 58.04%, which has a sig-nificant improvement over the previous best joint accuracy.",
"Particularly, the joint accuracy on MultiWOZ 2.1 beyond 60%.",
"Despite the sparsity of experimental result on MultiWOZ 2.2, our model still leads by a large margin in the existing public models.",
"Similar to (Kim et al., 2020), our model achieves higher joint accuracy on MultiWOZ 2.1 than that on MultiWOZ 2.0.",
"For MultiWOZ 2.2, the joint accuracy of categorical slots is higher than that of non-categorical slots.",
"This is because we utilize the hybrid way of the extractive method and the classification-based method to treat categorical slots.",
"However, we can only utilize the extractive method for non-categorical slots since they have no ontology (i.e., candidate value set).",
"Pre-trained Language Model For a fair comparison, we employ different pre-trained language models with different scales as encoders for training and testing on MultiWOZ 2.1 dataset.",
"As shown in Table 2, the joint accuracy of other implemented ALBERT and BERT encoders decreases in varying degrees.",
"In particular, the joint accuracy of BERT-base-uncased decreased by 1.38%, but still outperformed the previous state-of-the-art performance on MultiWOZ 2.1.",
"The result demonstrates the effectiveness of DSS-DST.",
"the effectiveness of the Preliminary Selector and Ultimate Selector respectively, we conduct an ablation study",
"of the two slot selectors on MultiWOZ 2.1.",
"As shown in Table 3, we observe that the performance of the separate Preliminary Selector is better than that of the separate Ultimate Selector.",
"This is presumably because the Preliminary Selector is the head of the Dual Slot Selector, it is stable when it handles all slots.",
"Nevertheless, the input of the Ultimate Selector is the slots selected by the Preliminary Selector, and its function is to make a refined judgment.",
"Therefore, it will be more vulnerable when handling all the slots independently.",
"In addition, when the two selectors are removed, the performance drops drastically.",
"This demonstrates Model MultiWOZ 2.1 Our Model 60.73 Dialogue History 58.36 (-2.37) Table 4: The ablation study of the DSS-DST on the MultiWOZ 2.1 dataset with joint accuracy (%).",
"Dialogue History for the Dual Slot Selector As aforementioned, we consider that the slot selection only depends on the current turn dialogue.",
"In order to verify it, we attach the dialogue of the previous turn to the current turn dialogue as the input of the Dual Slot Selector.",
"We observe in Table 4 that the joint accuracy decreases by 2.37%, which implies the redundant information of dialogue history confuse the slot selection in the current turn.",
"Dialogue History for the Slot Value Generator We try the number from one to three for the k to observe the influence of the selected dialogue history on the Slot Value Generator.",
"As shown in Table 5, the model achieves better performance on MultiWOZ 2.1 when k = 2 , 3 than that of k = 1 .",
"Furtherly, the performance of k = 2 is better than that of k = 3 .",
"We conjecture that the dialogue history far away from the current turn is little helpful because the relevance between two sentences in dialogue is strongly related to their positions.",
"The above ablation studies show that dialogue history confuses the Dual Slot Selector, but it plays a crucial role in the Slot Value Generator.",
"This demonstrates that there are fundamental differences between the two processes, and confirms the necessity of dividing DST into these two sub-tasks.",
"We analyze the performance of the Dual Slot Selector and compare it with other previous work in MultiWOZ 2.1.",
"Here we choose the SOM-DST and list the state operations and the corresponding F1 scores as a comparison.",
"The SOM-DST sets four state operations (i.e., CARRYOVER, DELETE, DONTCARE, UPDATE), while our model clas-sifies the slots into two classes (i.e., inherit and Model MultiWOZ 2.2 Joint Cat-joint Our Model 58.04 76.32 -Extractive Method 50.01 66.15 Table 8: The ablation study of the DSS-DST on the MultiWOZ 2.2 dataset with joint accuracy (%) and joint accuracy on categorical slots.",
"update ).",
"It means that DELETE, DONTCARE, and UPDATE in SOM-DST all correspond to update in our model.",
"As shown in Table 6, our model still achieves superior performance when dealing with update slots, which contain DONTCARE, DELETE, and other difficult cases.",
"Table 7 shows the domain-specific results of our model on the latest MultiWOZ 2.2 dataset.",
"We can observe that the performance of our model in taxi domain is lower than that of the other four domains.",
"We investigate the dataset and find that all the slots in taxi domain are non-categorical slots.",
"This indicates the reason that we can only utilize the extractive method for non-categorical slots since they have no ontology.",
"Furthermore, we test the performance of using the separate classification-based method for categorical slots.",
"As illustrated in Table 8, the joint accuracy of our model and categorical slots decreased by 8.03% and 10.17%, respectively.",
"We introduce an effective two-stage DSS-DST which consists of the Dual Slot Selector based on the current turn dialogue, and the Slot Value Generator based on the dialogue history.",
"The Dual Slot Selector determines each slot whether to update or to inherit based on the two conditions.",
"The Slot Value Generator employs a hybrid method to generate new values for the slots selected to be updated according to the dialogue history.",
"Our model achieves state-of-the-art performance of 56.93%, 60.73%, and 58.04% joint accuracy with signifi-cant improvements (+2.54%, +5.43%, and +6.34%) over previous best results on MultiWOZ 2.0, MultiWOZ 2.1, and MultiWOZ 2.2 datasets, respectively.",
"The mechanism of a hybrid method is a promising research direction and we will exploit a more comprehensive and efficient hybrid method for slot value generation in the future.",
"This work was supported by the National key research and development project (2017YFB1400603) and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 61921003).",
"We thank the anonymous reviewers for their insightful comments.",
"The claims in this paper match the experimental results.",
"The model utilizes the hybrid method for slot value generation, so it is universal and scalable to unseen domains, slots, and values.",
"The experimental results can be expected to generalize."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain"
] |
[
"Neural language models are known to have a high capacity for memorization of training samples.",
"This may have serious privacy implications when training models on user content such as email correspondence.",
"Differential privacy (DP), a popular choice to train models with privacy guarantees, comes with significant costs in terms of utility degradation and disparate impact on subgroups of users.",
"In this work, we introduce two privacy-preserving regularization methods for training language models that enable joint optimization of utility and privacy through (1) the use of a discriminator and (2) the inclusion of a novel triplet-loss term.",
"We compare our methods with DP through extensive evaluation.",
"We show the advantages of our regularizers with favorable utility-privacy trade-off, faster training with the ability to tap into existing optimization approaches, and ensuring uniform treatment of under-represented subgroups.",
"Neural language models (Bengio et al., 2003; Mikolov et al., 2010) have recently seen significant gains in capabilities, and are deployed at scale in several real-world scenarios (Chen et al., 2019; Adam et al., 2020).",
"Training these models on domain-specific user data can further improve their utility.",
"The volume of data required, coupled with the inherent sparsity of natural language which often means all data are unique, opens the door to an array of privacy attacks against models and their training data.",
"Sample memorization poses a substantial risk by enabling model inversion attacks (Carlini et al., 2020; Ramaswamy et al., 2020; Inan et al., 2021).",
"In these attacks, a curious or malevolent user can query a pre-trained language model on any data record with the intention of reconstructing (parts of) training samples 1 .",
"Differential Privacy (DP) (Dwork, 2006) is the gold standard approach to address this issue, thanks to its strong and rigorous privacy guarantees.",
"DP-SGD (Abadi et al., 2016) is a popular method to train neural models with differential privacy guarantees and it works by clipping of the gradients and addition of noise in each update, which provides worst-case guarantees that reflect the likelihood of leaking any attribute of any member of the dataset into the trained model.",
"The worst-case guarantees of differential privacy are not customizable, in other words, they cannot be relaxed to protect only certain attributes.",
"Therefore, DP incurs significant loss to model utility (Tramr and Boneh, 2020).",
"DP training of models is also much slower, with cumbersome hyper-parameter tuning and development (Wu et al., 2017; Subramani et al., 2020).",
"It has also been shown that DP's utility loss is much worse for under-represented groups (Bagdasaryan et al., 2019; Farrand et al., 2020), which can have financial and societal ramifications (Pujol et al., 2020).",
"To address these issues, we relax the strong assumptions of the DP threat model and assume an adversary with finite-capacity (finite statistical, compute, and side information) who attempts to recover sensitive user-level information from the trained model (Carlini et al., 2019).",
"We propose two privacy regularization methods, one based on adversarial training and another on a novel privacy loss term, to jointly optimize for privacy and utility of language models.",
"The main idea of our regularizers is to prevent the last hidden state representation of the language model for an input sequence x from being linked back to the sensitive attribute we are trying to protect, in our case, the identity of the author.",
"We use the last hidden state as it corresponds to the embedding of the sequence x .",
"2 We 2 Although we consider recurrent neural network-based language models in this work, our approach is applicable in transformer-based language models as well.",
"consider the linkability of the input representation to the sensitive attribute (author) as a proxy since it is commensurate with the linked and linkable information definitions in the General Data Protection Regulation (GDPR Article 29 Working Party, 2014).",
"By framing privacy as an optimization problem, we can apply the well-developed machinery of large-scale gradient-based optimization, enabling us to train models at scale while jointly tuning for an optimal privacy-utility trade-off.",
"To validate our approach, we develop an evaluation framework for assessing a model's privacy loss.",
"We employ the exposure metric introduced in (Carlini et al., 2019) and introduce a reconstruction (tab) attack as a realistic scenario to evaluate and compare LSTM language models trained using our regularization with those trained with differential privacy, on Avocado (Oard et al., 2015) and Reddit (Vlske et al., 2017) datasets.",
"We also empirically demonstrate that, unlike DP, our technique does not have disparate impacts on underrepresented groups.",
"Our work is closely related to (Coavoux et al., 2018) and (Li et al., 2018).",
"Coavoux et al. consider an attacker who eavesdrops on the hidden representations of a pre-trained model during inference and tries to recover information about the input text.",
"Adversarial training is used as a mitigation to reduce the attacker's performance (Wang et al., 2019).",
"Li et al. use adversarial training to protect private author attributes such as age or gender, in learned text representations for part-of-speech tagging and sentiment analysis to gain better performance on out-of-domain corpora.",
"We, on the other hand, use adversarial training and a triplet-based regularization to train private language models that do not memorize sensitive user information, which has not been explored before.",
"We evaluate our models accordingly, by trying to extract training samples.",
"Prior work has studied membership inference attacks against models (Shokri et al., 2017; Yeom can consider the representation corresponding to the special token [CLS] as the embedding of the sequence x . et al., 2018; Song and Shmatikov, 2019), however, our regularizations do not target these attacks.",
"Figure 1 shows our first proposed regularizer which is adversarial in nature.",
"We feed an input text sequence x to the language model and extract the last hidden state representation of the model for x ; denoted by h x .",
"h x is then fed to a discriminator parameterized by d , which plays the role of an attacker who attempts to predict what the sensitive label (in our case, the author, y ) for x is.",
"The output probability distribution of the discriminator for the input h x , p d = Pr( | h x ; d ) is then used to compute both the privacy loss LLM-P of the language model and the discriminator loss LD-CE .",
"During training, the discriminator optimizes for better linking of the last hidden state representations to the authors.",
"Thus, the discriminator loss is LD-CE ( h x , y ; d ) = log Pr( y | h x ; d ) .",
"Conversely, the language model optimizes lm such that it (1) improves the utility of the language model and (2) flattens the probability distribution over authors.",
"Thus, we devise the following loss function: LLM ( x ; d , lm )= LLM-CE + LLM-P (1) LLM-CE is the utility loss, for which we use conventional cross entropy loss over the next-word predictions.",
"LLM-P is the privacy loss : LLM-P ( h x ; d ) = 1 MM (cid:88) c =1 log Pr( c | h x ; d ) (2) i.e. the KL divergence between the distribution over authors and the uniform distribution where M is the number of classes (authors).",
"The goal of this term is to drive the discriminator to predict randomly uniform outputs (Raval et al., 2019).",
"The reason we devised this loss as opposed to using 0 50 100 150 200 0 10 20 30 40 50 E x p o s u r e Canary Repetition Unmitigated DP Triplet Adversarial",
"L D-CE is that we do not just want the discriminator to assign zero probability to the correct author, we want p d to be uniform so that it has no information about the correct author.",
"Hyperparameter allows for trading off privacy and utility.",
"One potential downside of the proposed adversarial regularizer is that the capacity of the discriminator must scale with the number of authors, and thus the size of the training data.",
"To better accommodate the larger number of authors in large datasets, we investigate another regularizer that does not require a discriminator.",
"We build on the intuition that to obfuscate an attribute, we can increase the distance between representations of samples that have the same label for that attribute while decreasing the distance between samples with different labels.",
"To this end, we use the language model loss ( LLM ) of the previous section (Eq 1), and we set the privacy loss to be the triplet loss: LLM-P = (cid:107) h x h p (cid:107) 2 (cid:107) h x h n (cid:107) 2 (3) The triplet loss is commonly used in vision tasks for training embeddings that map images from the same category to neighboring points in the embedding space (Chechik et al., 2010).",
"We, however, invert this loss and use it for an opposite purpose: privacy regularization.",
"During the training of the language model, we select a baseline sample, x , a positive sample p (with different sensitive label) and a negative sample n (with the same sensitive label) and feed them through the language model and extract the last hidden states h x , h p and h n , respectively.",
"We find the l 2 distance between h x , h p , and h n and based on their labels, add them to or subtract them from the loss.",
"To implement this, in practice, we sample a baseline batch and a second auxiliary batch during training.",
"We feed both the baseline batch ( x ) and the auxiliary batch ( a ) through the language model and extract the last hidden states.",
"We then calculate the distance between the last hidden states of the corresponding samples in the two batches.",
"If the samples have different labels for the sensitive attribute (author), we add their distance to the loss, otherwise, we subtract it.",
"The privacy loss becomes: LLM-P = (cid:88) i : y xi = y ai (cid:107) h x i h a i (cid:107) 2 (cid:88) j : y xj (cid:54) = y aj (cid:107) h x j h a j (cid:107) 2 (4) 3 Evaluation In our experiments, we use a subset of the Avocado corporate email dataset (Oard et al., 2015) with 100 users and 60,000 samples and a subset of Reddit dataset (Vlske et al., 2017) with 10,000 users and 3 million samples.",
"Both of these datasets are in English, covering formal and informal writing.",
"We create a 80 / 20% training/test set split.",
"We use a two-layer LSTM model as the language model for the next-word prediction task.",
"We compare models trained with our proposed regularizer to differentially private (DP) ones (Abadi et al., 2016).",
"For the privacy accounting, we use Gaussian differential privacy (Bu et al., 2019).",
"We use language model perplexity as a measure of utility.",
"Due to space limitations, we focus evaluations on privacy metrics for several set levels of achieved test perplexity, listed in Table 1 in the appendix.",
"See appendix A.2 for a more detailed description of the experimental setup and extra analysis of overheads and complexity of each regularizer.",
"Privacy measurements w/ exposure metric.",
"To empirically compare the privacy of our methods to that of DP, we adopt the exposure metric introduced in (Carlini et al., 2019).",
"The higher the exposure of a sequence, the more the model's memorization and the easier it is to extract the sequence from the language model.",
"To measure exposure we insert sequences of five random words (canaries) to the training data (appendix A.5).",
"We insert unique 0% 1% 2% 3% 4% 5% Unmitigated DP Triplet Adversarial A tt a c k A cc u r a c y Synthesized Canary Real Canary",
"Figure 2 shows the exposure results per canary repetition.",
"These results are averaged over all the users.",
"In each sub-figure, the perplexities of the models are similar, hence we can compare the privacy levels at similar utilities.",
"Fig. 2a compares trained models using different techniques on the Avocado dataset, where they all have relatively high perplexities compared to a fully trained conventional model (Table 1).",
"Fig. 2b has the same setup, however, the models have lower perplexities.",
"Naturally, for having better utility we are trading off privacy, which can be seen by comparing the exposure values in these two figures and observing that the second one has higher exposure values (lower privacy).",
"Finally, Fig. 2c shows the exposure results for Reddit.",
"In all cases, we see that the unmitigated model has the highest exposure, as expected.",
"We also observe that for canaries (patterns) that are repeated more than 9 times (for each user), our mitigation offers lower exposure compared to DP, especially in the high perplexity case.",
"This is because clipping and noise addition in DP is attribute and data-agnostic, meaning that noise is added to all samples regardless of whether or not they contain sensitive information.",
"Therefore, repeated patterns are less protected.",
"If we want to protect a pattern with n repetitions, we would need to apply noise that is n larger, which would degrade the utility gravely and would not yield the same perplexity.",
"For lower repetition canaries, our mitigations have comparable performance to DP.",
"For all these experiments the Gaussian differential privacy criterion is extremely large ( 10 20 ), which practically yields (cid:15) .",
"We also experimented with lower (cid:15) values (e.g. (cid:15) 7 ), however, it yields a model with perplexity of 650, having an extremely low utility.",
"and see if the entire sequence is reconstructed using the language model.",
"We report the rate of correct reconstruction of canaries as the accuracy of the attack.",
"We use the synthetic canaries from the previous experiment, and also select real canaries from the training corpus to create a real-world scenario.",
"Fig. 3a shows that for a high perplexity model, the accuracy of the tab attack on synthesized canaries is very small, even for the unmitigated model.",
"The unmitigated model reaches the designated perplexity in less than an epoch, and hence it does not memorize the canaries.",
"For the real canaries, however, the memorization is higher, since they follow grammatical rules.",
"In the lower perplexity case of Fig. 3b, we see that the synthesized canaries are mostly memorized by the unmitigated model.",
"Our mitigations outperform DP, especially for the synthesized canaries.",
"DP is not context-sensitive and applies the same amount of noise to all samples, thereby leaving correlated and higher repeated samples less-protected.",
"Our mitigations, however, learn what sequences are link-able to their authors, and obfuscate them such that they no longer leak the identifying secret.",
"Effect on under-represented users.",
"Differential privacy has disparate impact on the accuracy of different subgroups of the dataset (Bagdasaryan et al., 2019).",
"Here, we want to measure the effect of our mitigations on the utility of the model among users with various data samples.",
"For each user, we measure the average perplexity of the model for their samples in the test set, and then subtract this from the same value for an unmitigated model.",
"This would yield the average drop in utility, per user.",
"We compare the utility drop of well-represented users to under-represented ones by taking the top 5 users with the most samples and the bottom 5 users with the fewest samples from Avocado dataset.",
"We then measure the average utility drop over each group of 5 users on the test set.",
"Figure 3c shows these results.",
"We see that differential privacy has disparate impact, 29 points, on the two sub-groups of users (authors), whereas this gap is only 7 points for models trained with our mitigations.",
"It's important to remember that in general, distinguishing under-represented users from those whose data is similar to others but who have contributed fewer samples is a difficult task.",
"However, for Figure 3c's results, if these users' data came from the same distribution as the ones with lots of samples (i.e. if these people were merely less-contributing), the utility loss would be similar for all groups when applying user-level DP (what we use).",
"DP's disparate impact on the utility loss for these two groups suggests that, in our case, the less-contributing authors are probably also underrepresented.",
"This work introduces two privacy mitigation methods to jointly optimize for privacy and utility.",
"Extensive experiments show that our approach provides comparable and in certain cases a higher level of privacy compared to differentially private model training.",
"We further empirically demonstrate, that our methods do not exhibit disparate impacts on under-represented groups and have significantly less overhead on training performance.",
"The Avocado corpus is licensed for research applications under strict terms intended to protect the privacy of the correspondents.",
"While the end-user license agreement does not indicate what consent was granted by the participants, one term of the license is that End user will obtain whatever training and approval is required by their organization for working with human subjects data, which we have obtained (more details in (Oard et al., 2015)).",
"While handling sensitive email data (Avocado) we made sure to abide by the terms of its end-user license agreement (EULA) which has provisions to protect the privacy of members of the corpus.",
"Furthermore, we took measures such as scrubbing named entities before using the data for model training.",
"The over-arching goal of our work is to contribute to language model development that protects the privacy rights of users who contribute their data.",
"While we rigorously evaluated our models by applying state-of-the-art attacks, deploying these models in real-world setups requires further verification that users' privacy is preserved.",
"The authors would like to thank the anonymous reviewers and meta-reviewers for their helpful feedback.",
"We also thank Peter Kairouz and Mohammadkazem Taram for insightful discussions.",
"Additionally, we thank our MSR colleagues and UCSD Berg Lab for their helpful comments and feedback."
] | [
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"method",
"method",
"abstain",
"objective",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"Finding codes given natural language query is beneficial to the productivity of software developers.",
"Future progress towards better semantic matching between query and code requires richer supervised training resources.",
"To remedy this, we introduce the CoSQA dataset.",
"It includes 20,604 labels for pairs of natural language queries and codes, each annotated by at least 3 human annotators.",
"We further introduce a contrastive learning method dubbed CoCLR to enhance query-code matching, which works as a data augmenter to bring more artificially generated training instances.",
"We show that evaluated on CodeXGLUE with the same CodeBERT model, training on CoSQA improves the accuracy of code question answering by 5.1%, and incorporating CoCLR brings a further improvement of 10.5%.",
"1 .",
"With the growing population of software developers, natural language code search, which improves the productivity of the development process via retrieving semantically relevant code given natural language queries, is increasingly important in both communities of software engineering and natural language processing (Allamanis et al., 2018; Liu et al., 2020a).",
"The key challenge is how to effectively measure the semantic similarity between a natural language query and a code.",
"There are recent attempts to utilize deep neural networks (Gu et al., 2018; Wan et al., 2019; Feng et al., 2020), which embed query and code as dense vectors to perform semantic matching in a unified vector space.",
"However, these models are Work done during internship at Microsoft Research Asia.",
"1 The CoSQA data and leaderboard are available at https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/NL-code-search-WebQuery.",
"The code is available at https://github.com/Jun-jie-Huang/CoCLR python check if path is absolute path or relative path Query:Code: def is_relative_url ( url ): \"\"\"simple method to determine if a url is relative or absolute\"\"\" if url . startswith ( \"#\" ): return None if url . find ( \"://\" )> 0 or url . startswith ( \"//\" ): # either 'http(s)://...' or '//cdn...' and therefore absolute return False return True Label: 1 capitalize letters in string python Query: Code: def snake_to_camel ( s : str ) -> str : \"\"\" Convert string from snake case to camel case. \"\"\" fragments = s . split ( '_' ) return fragments [ 0 ] + '' . join ( x . title ()for x in fragments [ 1 :]) Label: 0 Example 2: Example 1: Figure 1: Two examples in CoSQA. A pair of a web query and a Python function with documentation is annotated with 1 or 0, representing whether the code answers the query or not. mostly trained on pseudo datasets in which a natural language query is either the documentation of a function or a tedious question from Stack Overflow. Such pseudo queries do not reflect the distribution of real user queries that are frequently issued in search engines. To the best of our knowledge, datasets that contain real user web queries include Lv et al. (2015), CodeSearchNet Challenge (Hu-sain et al., 2019), and CodeXGLUE 2 (Lu et al., 2021). These three datasets only have 34, 99, and 1,046 queries, respectively, for model testing. The area lacks a dataset with a large amount of real user queries to support the learning of statistical models like deep neural networks for matching the semantics between natural language web query and code. To address the aforementioned problems, we introduce CoSQA, a dataset with 20,604 pairs of web queries and code for Co de S earch and Q uestion A nswering, each with a label indicating whether 2 https://github.com/microsoft/CodeXGLUE Dataset Size Natural Language Code human-annotated ? CodeSearchNet (Husain et al., 2019) 2.3M Documentation Function No Gu et al. (2018) 18.2M Documentation Function No Miceli Barone and Sennrich (2017) 150.4K Documentation Function No StaQC (manual) (Yao et al., 2018) 8.5K Stack Overflow question Code block Yes StaQC (auto) (Yao et al., 2018) 268K Stack Overflow question Code block No CoNaLa (manual) (Yin et al., 2018) 2.9K Stack Overflow question Statements Yes CoNaLa (auto) (Yin et al., 2018) 598.2K Stack Overflow question Statements No SO-DS (Heyman and Cutsem, 2020) 12.1K Stack Overflow question Code block No Nie et al. (2016) 312.9K Stack Overflow question Code block No Li et al. (2019) 287 Stack Overflow question Function Yes Yan et al. (2020) 52 Stack Overflow question Function Yes Lv et al. (2015) 34 Web query Function Yes CodeSearchNet (Husain et al., 2019) 99 Web query Function Yes CodeXGLUE WebQueryTest 2 1K Web query Function Yes CoSQA (ours) 20.6K Web query Function Yes Table 1: Overview of existing datasets on code search and code question answering. Some datasets containing both unlabelled data and labelled data are listed in separate lines. the code can answer the query or not. The queries come from the search logs of the Microsoft Bing search engine, and the code is a function from GitHub 3 . To scale up the annotation process on such a professional task, we elaborately curate potential positive candidate pairs and perform large scale annotation where each pair is annotated by at least three crowd-sourcing workers. 
"Furthermore, to better leverage the CoSQA dataset for query-code matching, we propose a code contrastive learning method (CoCLR) to produce more artificially generated instances for training.",
"We perform experiments on query-code matching on two tasks: code question answering and code search.",
"On code question answering, we find that the performance of the same CodeBERT model improves by 5.1% after training on the CoSQA dataset, and further improves by 10.5% after incorporating our CoCLR method.",
"Moreover, experiments on code search demonstrate similar results.",
"2 Related Work.",
"In this part, we describe existing datasets and methods for code search and code question answering.",
"2.1 Datasets.",
"A number of open-sourced datasets with a large amount of text-code pairs have been proposed for the purposes of code search (Husain et al., 2019; Gu et al., 2018; Nie et al., 2016) and code question answering (Yao et al., 2018; Yin et al., 2018; Heyman and Cutsem, 2020).",
"There are also high-quality but small-scale test sets curated for code search evaluation (Li et al., 2019; Yan et al., 2020; Lv et al., 2015).",
"Husain et al. (2019), Gu et al. (2018), and Miceli Barone and Sennrich (2017) collect large-scale unlabelled text-code pairs by leveraging comments left by programmers in code functions from GitHub.",
"Yao et al. (2018) and Yin et al. (2018) automatically mine massive code answers for Stack Overflow questions with a model trained on a human-annotated dataset.",
"Nie et al. (2016) extract the Stack Overflow questions and answers with the most likes to form text-code pairs.",
"Among all text-code datasets, only those of Lv et al. (2015), the CodeSearchNet Challenge (Husain et al., 2019), and CodeXGLUE contain real user web queries, but they only have 34, 99, and 1,046 queries for testing and do not support training data-driven models.",
"Table 1 gives an overview of these datasets.",
"2.2 Code Search Models.",
"Models for code search can be mainly divided into two categories: information retrieval based models and deep learning based models.",
"Information retrieval based models match keywords in the query with the code sequence (Bajracharya et al., 2006; Liu et al., 2020b).",
"Keyword extension via query expansion and reformulation is an effective way to enhance performance (Lv et al., 2015; Lu et al., 2015; Nie et al., 2016; Rahman et al., 2019; Rahman, 2019).",
"Deep learning based models encode query and code into vectors and use vector similarities as the metric to retrieve code (Sachdev et al., 2018; Ye et al., 2016; Gu et al., 2018; Cambronero et al., 2019; Yao et al., 2019; Liu et al., 2019a; Feng et al., 2020; Zhao and Sun, 2020).",
"There are also ways to exploit code structures to learn better representations for code search (Wan et al., 2019; Haldar et al., 2020; Guo et al., 2020).",
"3 CoSQA Dataset.",
"In this section, we introduce the construction of the CoSQA dataset.",
"We study Python in this work and plan to extend to more programming languages in the future.",
"Each instance in CoSQA is a pair of a natural language query and code, annotated with 1 or 0 to indicate whether the code can answer the query.",
"We first describe how we curate web queries, obtain code functions, and get candidate query-code pairs.",
"After that, we present the annotation guidelines and statistics.",
"3.1 Data Collection.",
"Query Curation.",
"We use the search logs from the Microsoft Bing search engine as the source of queries.",
"Queries without the keyword python are removed.",
"Based on our observations and previous work (Yao et al., 2018; Yan et al., 2020), there are seven basic categories of code-related web queries: (1) code searching, (2) debugging, (3) conceptual queries, (4) tools usage, (5) programming knowledge, (6) vague queries, and (7) others.",
"Queries in categories (2)-(7) are generally unlikely to be answered by a code function alone, since they may require abstract and general explanations in natural language.",
"Therefore, we only target the first category of web queries, those with code searching intent, i.e., queries that can be answered by a piece of code.",
"To filter out queries without code searching intent, we manually design heuristic rules based on exact keyword matching (a code sketch of such a filter follows the Code Collection paragraph below).",
"For example, queries with the word benefit or difference are likely to seek a conceptual comparison rather than a code function, so we remove all queries with such keywords.",
"Based on these observations, we manually collect more than 100 keywords in total.",
"Table 2 displays a part of the selected keywords used for removing unqualified queries; more details can be found in Appendix A.",

Table 2: Selected keywords for our heuristic rules to filter out web queries without code search intent, in five categories. Vague queries are morphologically variable, so we ignore this category.

| Category | Some Keywords |
|---|---|
| Debugging | exception, index out of, ignore, stderr, ... |
| Conceptual Queries | vs, versus, difference, advantage, benefit, drawback, how many, what if, why, ... |
| Programming Knowledge | tutorial, advice, argument, suggestion, statement, declaration, operator, ... |
| Tools Usage | console, terminal, open python, studio, ide, ipython, jupyter, vscode, vim, ... |
| Others | unicode, python command, @, (), ... |

"To evaluate the query filtering algorithm, we construct a human-annotated test set.",
"We invite three experienced Python programmers to label 250 randomly sampled web queries with a binary label of having/not having code searching intent.",
"Then we evaluate the accuracy of the intent predictions given by the keyword-based rules against those given by humans.",
"We find that the F1 score reaches 67.65 and the accuracy is up to 82.40.",
"This demonstrates the effectiveness of our rule-based query filtering algorithm.",
"Code Collection.",
"The selection of the code format is another important issue in constructing a query-code matching dataset; options include a statement (Yin et al., 2018), a code snippet/block (Yao et al., 2018), a function (Husain et al., 2019), etc.",
"In CoSQA, we simplify the task and adopt a complete Python function with paired documentation as the answer to a query, for the following reasons.",
"First, it is complete and independent in functionality, which makes it more likely to answer a query.",
"Second, it is syntactically correct and formally consistent, which enables parsing syntax structures for advanced query-code matching.",
"Additionally, a complete code function is often accompanied by documentation written by programmers to help understand its functionality and usage, which is beneficial for query-code matching (see Section 6.4 for more details).",
"We take the CodeSearchNet Corpus (Husain et al., 2019) as the source of code functions; it is a large-scale open-sourced code corpus that allows modification and redistribution.",
"The corpus contains 2.3 million functions with documentation and 4.1 million functions without documentation from public GitHub repositories, spanning six programming languages (Go, Java, JavaScript, PHP, Python, and Ruby).",
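"As promised above, a sketch of what the keyword-based intent filter can look like; the keyword list below is only the excerpt shown in Table 2, not the full set of more than 100 rules described in Appendix A.",
```python
# Excerpt of the filtering keywords from Table 2 (the full rule set has 100+).
NON_SEARCH_KEYWORDS = [
    "exception", "index out of", "ignore", "stderr",        # debugging
    "vs", "versus", "difference", "advantage", "benefit",   # conceptual
    "tutorial", "advice", "argument", "suggestion",         # knowledge
    "console", "terminal", "ide", "ipython", "jupyter",     # tools usage
    "unicode", "python command",                            # others
]

def has_code_search_intent(query: str) -> bool:
    """Keep only queries that mention python and trigger none of the
    exact-keyword rules for non-code-search intent."""
    q = query.lower()
    if "python" not in q:
        return False
    tokens = set(q.split())
    for kw in NON_SEARCH_KEYWORDS:
        if " " in kw:
            if kw in q:            # multi-word rule: substring match
                return False
        elif kw in tokens:         # single-word rule: whole-token match
            return False
    return True
```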
"In CoSQA, we only keep complete Python functions with documentation and remove those with non-English documentation or special tokens (e.g., <img...> or http://).",

Table 3: Examples of annotated query-code pairs with explanations.

| # | Query | Code | Explanation |
|---|---|---|---|
| (1) | boolean function to check if variable is a string python | `def is_string(val): """Determines whether the passed value is a string, safe for 2/3.""" try: basestring except NameError: return isinstance(val, str) return isinstance(val, basestring)` | Code can fully satisfy the demand of the query; therefore the code is a correct answer. |
| (2) | python check if argument is list | `def is_listish(obj): """Check if something quacks like a list.""" if isinstance(obj, (list, tuple, set)): return True return is_sequence(obj)` | Code meets the demand of checking the list type, plus the tuple and set types, which exceeds the query's demand; it is a correct answer. |
| (3) | python measure distance between 2 points | `def vector_distance(a, b): """The Euclidean distance between two vectors.""" a = np.array(a) b = np.array(b) return np.linalg.norm(a - b)` | Code computes Euclidean distance, which is one category of vector distances, so it is correct. |
| (4) | python measure distance between 2 points | `def dist_sq(self, other): """Distance squared to some other point.""" dx = self.x - other.x dy = self.y - other.y return dx ** 2 + dy ** 2` | Code computes squared distance, which is another category of vector distances. |
| (5) | read write in the same file python | `def file_read(filename): """Read a file and close it. Returns the file source.""" fobj = open(filename, 'r'); source = fobj.read(); fobj.close() return source` | Query asks for reading and writing, but the code only implements reading; it satisfies 50% of the demands and is not a correct answer. |
| (6) | python get the value in the list starting with the str | `def get_list_index(lst, index_or_name): """Return the index of an element in the list.""" if isinstance(index_or_name, six.integer_types): return index_or_name return lst.index(index_or_name)` | The query looks for a list element that starts with a specific str, but the code neither matches a prefix nor returns the value (it returns the index); two demands are unsatisfied, so less than 50% is met. |
| (7) | python check if something is an array | `def is_number(obj): """Check if obj is number.""" return isinstance(obj, (int, float, np.int_, np.float_))` | A small part of the code is relevant to the query, but it cannot answer it. |
"Candidate Query-code Pairs Obviously, it is not possible to annotate all query-code pairs.",
"To improve efficiency, we wipe off low-confidence instances before annotation.",
"Specifically, we employ a CodeBERT-based matching model (Feng et al., 2020) to retrieve high-confidence codes for every query.",
"The CodeBERT encoder is fine-tuned on 148K automated-minded Python Stack Overflow question-code pairs (StaQC) (Yao et al., 2018) with the default parameters.",
"A cosine similarity score on the pooled [ CLS ] embeddings of query and code is computed to measure the relatedness.",
"To guarantee the quality of candidates, we automatically remove low-quality query-code pairs according to the following evaluation metrics.",
"To ensure the code may answer the query, we only keep the code with the highest similarity to the query and remove the pairs with a similarity below 0.5.",
"Annotating such a domain-specific dataset is dif-ficult since it requires the knowledge of Python.",
"Even experienced programmers do not necessarily understand all code snippets.",
"To ensure the feasibility and control annotation quality, we design comprehensive annotation guidelines and take a two-step annotation procedure.",
"Annotation Guidelines Our annotation guideline is developed through several pilots and further updated with hard cases as the annotation progresses.",
"Annotation participants are asked to make a two-step judgment for each instance: intent annotation and answer annotation.",
"In the first step of intent annotation , annotators are asked to judge whether the query has the intent to search for a code.",
"They will skip the second step if the query is without code search intent.",
"As shown in Section 3.1, vague queries are hard to be filtered out by our heuristic intent filtering algorithm.",
"Therefore, it is necessary to take this step to remove such queries so that we can focus more on the matching between query and code rather than query discrimination.",
"In the second step of answer annotation , annotators are asked to judge whether the code can answer the query.",
"They should label the instance with 1 if the code is a correct answer ; otherwise, it is labeled 0.",
"In this step, judgment should be made after comprehensively considering the relevance between query with documentation, query with function header, and query with function body.",
"During annotation, it is often the case that a code function can completely answer the query, which means that the code can satisfy all the demands in the query and it is a correct answer.",
"(Case (1) in Table 3.) But more often, the code can not completely answer the query.",
"It may exceed, partially meet or even totally dissatisfy the demands of the query.",
"Therefore we divide such situations into four categories and give explanations and examples (Table 3) for each category: If code can answer the query and even exceed the demand of the query, it is a correct answer.",
"(Case (2) in Table 3.) If code can meet a certain category of the query demands, it is also a correct answer.",
"(Case (3) and Case (4) in Table 3.) If code satisfies no more than 50% of the query demands, the code can not correctly answer the query.",
"(Case (5) and Case (6) in Table 3.) If a small part of the code is relevant to the query, the code can not be a correct answer.",
"(Case (7) in Table 3.) Annotation We ask more than 100 participants, who all have a good grasp of programming knowledge, to judge the instances according to the annotation guideline.",
"Participants are provided with the full guidelines and allowed to discuss and search on the internet during annotation.",
"When annotation is finished, each query-code pair has been annotated by at least three participants.",
"We remove the pairs whose inter-annotator agreement (IAA) is poor, where Krippendorff's alpha coefficient (Krip-pendorff, 1980) is used to measure IAA.",
"We also remove pairs with no-code-search-intent queries.",
"Finally, 20,604 labels for pairs of web query and code are retained, and their average Krippendorff's alpha coefficient is 0.63.",
"Table 4 shows the statistics of CoSQA.",
"Based on our CoSQA dataset, we explore two tasks to study the problem of query-code matching: code search and code question answering.",
"The first task is natural language code search, where we formulate it as a text retrieval problem.",
"Given a query q i and a collection of codes C = { c 1 , . . . , c H } as the input, the task is to find the most possible code answer c .",
"The task is evaluated by Mean Reciprocal Rank (MRR).",
"The second task is code question answering, where we formulate it as a binary classification problem.",
"Given a natural language query q and a code sequence c as the input, the task of code question answering predicts a label of 1 or 0 to indicate whether code c answers query q or not.",
"The task is evaluated by accuracy score.",
"In this section, we first describe the model for query-code matching and then present our code contrastive learning method (CoCLR) to augment more training instances.",
"The base model we use in this work is a siamese network, which is a kind of neural network with two or more identical subnetworks that have the same architecture and share the same parameters and weights (Bromley et al., 1994).",
"By deriving fixed-sized embeddings and computing similarities, siamese network systems have proven effective in modeling the relationship between two text sequences (Conneau et al., 2017; Yang et al., 2018; Reimers and Gurevych, 2019).",
"We use a pretrained CodeBERT (Feng et al., 2020) as the encoder to map any text sequence to a d -dimensional real-valued vectors.",
"CodeBERT is a bimodal model for natural language and programming language which enables high-quality text and CodeBERT CodeBERT CodeBERT rewrite CodeBERT CodeBERT 1 1 CodeBERT 1 rewrite CodeBERT CodeBERT Figure 2: The frameworks of the siamese network with CodeBERT (left) and our CoCLR method (right).",
"code embeddings to be derived.",
"Specifically, it shares exactly the same architecture as RoBERTa (Liu et al., 2019b), which is a bidirectional Transformer with 12 layers, 768 dimensional hidden states, and 12 attention heads, and is repretrained by masked language modeling and replaced token detection objectives on CodeSearchNet corpus (Hu-sain et al., 2019).",
"For each query q i and code c i , we concatenate a [ CLS ] token in front of the sequence and a [ SEP ] token at the end.",
"Then we feed the query and code sequences into the CodeBERT encoder to obtain contextualized embeddings, respectively.",
"Here we use the pooled output of [ CLS ] token as the representations: q i = CodeBERT ( q i ) , c i = CodeBERT ( c i ) .",
"(1) Next we perform query-code matching through a multi-layer perceptron.",
"Following Chen et al. (2017) and Mou et al. (2016), we concatenate the query embedding q i and code embedding c i with the element-wise difference q i c i and element-wise product q i (cid:74) c i , followed by a 1-layer feed-forward neural network, to obtain a relation embedding: r ( i,i ) = tanh( W 1 [ q i , c i , q i c i , q i (cid:75) c i ]) .",
"We expect such an operation can help sharpen the cross information between query and code to capture better matching relationships such as contradiction.",
"Then we put the relation embedding r ( i,i ) into a final 1-layer perceptron classifier with a sigmoid output layer: s ( i,i ) = sigmoid ( W 2 r ( i,i ) ) .",
"Score s ( i,i ) can be viewed as the similarity of query q i and code c i .",
"L b = [ y i log s ( i,i ) + (1 y i ) log(1 s ( i,i ) )] , (3)",
"Now we incorporate code contrastive learning into the siamese network with CodeBERT.",
"Contrastive learning aims to learn representations by enforcing similar objects to be closer while keeping dissimilar objects further apart.",
"It is often accompanied with leveraging task-specific inductive bias to augment similar and dissimilar examples.",
"In this work, given an example of query and code ( q i , c i ) , we define our contrastive learning task on example itself, in-batch augmented examples ( q i , c j ) , and augmented example with rewritten query ( q (cid:48) i , c i ) .",
"Hence, the overall training objective can be formulated as: L = L b + L ib + L qr .",
"In-Batch Augmentation (IBA) A straightforward augmentation method is to use in-batch data, where a query and a randomly sampled code are considered as dissimilar and forced away by the models.",
"Specifically, we randomly sample n examples { ( q 1 , c 1 ) , ( q 2 , c 2 ) , . . . , ( q n , c n ) } from a mini-batch.",
"For ( q i , c i ) , we pair query q i with the other N 1 codes within the mini-batch and treat the N 1 pairs as dissimilar.",
"Let s ( i,j ) denote the similarity of query q i and code c j , the loss function of the example with IBA is defined as: L ib = 1 n 1 n (cid:88) j = 1 j (cid:54) = i log(1 s ( i,j ) ) , (5) Query-Rewritten Augmentation (QRA) The in-batch augmentation only creates dissimilar pairs from the mini-batch, which ignores to augment similar pairs for learning positive relations.",
"To remedy this, we propose to augment positive examples by rewriting queries.",
"Inspired by the feature that web queries are often brief and not necessarily grammatically correct, we assume that the rewritten query with minor modifications shares the same semantics as the original one.",
"Therefore, an augmented pair with a rewritten query from a positive pair can also be treated as positive.",
"Specifically, given a pair of query q i and code c i with y i = 1 , we rewrite q i into q (cid:48) i in one of the three ways: randomly deleting a word, randomly switching the position of two words, and randomly copying a word.",
"As shown in Section 6.3, switching position best helps increase the performance.",
"For any augmented positive examples, we also apply IBA on them.",
"Therefore the loss function for the example with QRA is: L qr = L (cid:48) b + L (cid:48) ib , (6) where L (cid:48) b and L (cid:48) ib can be obtained by Eq.",
"3 and Eq.",
"5 by only change q i to q (cid:48) i .",
"We experiment on two tasks, including code question answering and natural language code search.",
"We report model comparisons and give detailed analyses from different perspectives.",
"We train the models on the CoSQA dataset and evaluate them on two tasks: code question answering and code search.",
"On code question answering, we randomly split CoSQA into 20,000 training and 604 validation examples.",
"As for the test set, we directly use the WebQueryTest in CodeXGLUE benchmark, which is a testing set of Python code question answering with 1,046 query-code pairs and their expert annotations.",
"On code search, we randomly divide the CoSQA into training, validation, and test sets in the number of 19604:500:500, and restrict the instances for validation and testing are all positive.",
"We fix a code database with 6,267 different codes in CoSQA.",
"Baseline Methods CoSQA is a new dataset, and there are no previous models designed specifically for it.",
"Hence, we simply choose RoBERTa-base (Liu et al., 2019b) and CodeBERT (Feng et al., 2020) as the baseline methods.",
"The baseline methods are trained on CodeSearchNet Python corpus with balanced positive examples.",
"Negative samples consist of a balanced number of instances with randomly replaced code.",
"Evaluation Metric We use accuracy as the evaluation metric on code question answering and Mean Reciprocal Rank (MRR) on code search.",
"Implementation Details We initialize CoCLR with microsoft/codebert-base 4 repretrained on CodeSearchNet Python Corpus (Husain et al., 2019).",
"We use the AdamW optimizer (Loshchilov and Hutter, 2019) and set the batch size to 32 on the two tasks.",
"On code question answering, we set the learning rate to 1e-5, warm-up rate to 0.1.",
"On code search, we set the learning rate to 1e-6.",
"All hyper-parameters are tuned to the best on the validation set.",
"All experiments are performed on an NVIDIA Tesla V100 GPU with 16GB memory.",
"Table 5 shows the experimental results on the tasks of code question answering and code search.",
"We can observe that: (1) By leveraging the CoSQA dataset, siamese network with CodeBERT achieves overall performance enhancement on two tasks, especially for CodeXGLUE WebQueryTest, which is an open challenge but without direct training data.",
"The result demonstrates the high-quality of CoSQA and its potential to be the training set of WebQueryTest.",
"(2) By integrating the code contrastive learning method, siamese network with CodeBERT further achieves significant performance gain on both tasks.",
"Especially on the task of WebQueryTest, CoCLR achieves the new state-of-the-art result by increasing 15.6%, which shows the effectiveness of our proposed approach.",
"To investigate the effects of CoCLR in query-code matching, we perform ablation study to analyze the major components in our contrastive loss that are of importance to help achieve good performance.",
"We conduct experiments on the CoSQA code search task, using the following settings:",
"(i) fine-tuning with vanilla binary cross-entropy loss only,",
"(ii) fine-tuning with additional in-batch augmentation (IBA) loss,",
"(iii) fine-tuning with additional query-rewritten augmentation (QRA) loss, (vi) fine-tuning with both additional IBA and QRA loss.",
"And for QRA loss, we also test the three rewriting methods when applied individually.",
"The results are listed in Table 6.",
"We can find that: 4 https://github.com/microsoft/CodeBERT Model Data Code Question Answering Code Search RoBERTa 2 CSN 40.34 0.18 CodeBERT 2 CSN 47.80 51.29 CodeBERT CSN + CoSQA 52.87 54.41 CodeBERT + CoCLR CSN + CoSQA 63.38 64.66 Table 5: Evaluation on code question answering and code search.",
"(1) Both incorporating IBA and QRA individually or together improve models' performance.",
"This indicates the advantage of applying code contrastive learning for code search.",
"(2) No matter integrating IBA or not, the model with QRA by switching method performs better than models with the other two methods.",
"We attribute the phenomenon to the fact that web queries do not necessarily have accurate grammar.",
"So switching the positions of two words in the query better maximizes the agreement between the positive example and the pseudo positive example than the other two augmentations, which augments better examples to learn representations.",
"(3) Comparing the two augmentations, adding IBA achieves more performance gain than QRA (1.25% versus 9.10%).",
"As the numbers of examples with QRA and examples with IBA are not equal under two settings, we further evaluate the model with only one more example with IBA.",
"The MRR is 55.52%, which is comparable to the performance of adding one more example with QRA.",
"This suggests that there may be no difference between adding examples with IBA or examples with QRA.",
"Instead, the number of high-quality examples is important for training.",
"Similar findings are also reported in Sun et al. (2020), and a theoretical analysis is provided in Arora et al. (2019).",
"To explore the effects of different components of code in query-code matching, we evaluate CoCLR on code search and process the codebase by the following operations:",
"(i) removing the function header,",
"(ii) removing the natural language documentation,",
"(iii) removing the code statements in the function body.",
"We also combine two of the above operations to see the performance.",
"From the results exhibited in Table 7, we can find that: by removing code component, the result of removing documentation drops more than those of removing header and removing function body.",
"This demonstrates the importance of natural language documentation in code search.",
"Since documentation shares the same modality with the query and briefly describes the functionality of the code, it may be more semantically related to the query.",
"Besides, it also reveals the importance of using web queries rather than treating documentation as queries in code search datasets, which liberates models from the matching between documentation with code to the matching between query with documentation and code.",
"In this paper, we focus on the matching problem of the web query and code.",
"We develop a large-scale human-annotated query-code matching dataset CoSQA, which contains 20,604 pairs of real-world web queries and Python functions with documentation.",
"We demonstrate that CoSQA is an ideal dataset for code question answering and code search.",
"We also propose a novel code contrastive learning method, named CoCLR, to incorporate artificially generated instances into training.",
"We find that model with CoCLR outperforms the baseline models on code search and code question answering tasks.",
"We perform detailed analysis to investigate the effects of CoCLR components and code components in query-code matching.",
"We believe our annotated CoSQA dataset will be useful for other tasks that involve aligned text and code, such as code summarization and code synthesis.",
"We thank all anonymous reviewers for their useful comments.",
"We also thank Zenan Xu, Daya Guo, Shuai Lu, Wanjun Zhong and Siyuan Wang for valuable discussions and feedback during the paper writing process."
] | [
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"result",
"objective",
"method",
"other",
"other"
] |